# Propaganda in the Digital Age
Jason Cameron
John Cade
HSP3U
William Lyon Mackenzie CI
January 13, 2025
The 2016 U.S. Presidential election exposed the unprecedented power of digital propaganda. The spread of fake news and manipulated content through social media significantly influenced public opinion, underscoring how the internet has transformed propaganda from a tool wielded by governments into a decentralized, user-driven force (Nyhan, 2020). By 2024, advanced technologies like deepfakes, AI-generated content, and bot armies have further blurred the lines between truth and fiction. This essay explores the evolution of propaganda in the digital age, examines its impact on democratic discourse, and discusses strategies to counter misinformation and protect democratic processes.
The internet and social media have structurally changed propaganda from centralized, state-controlled messaging into decentralized, user-driven dissemination (Nieubuurt, 2021). Propaganda was once a top-down tool of governments and institutions, from the patriotic posters of WWII to state-controlled media in Cold War authoritarian regimes (Guess & Lyons, 2020). These traditional forms were limited in reach and required significant resources to produce and distribute (Leetaru, 2019). In contrast, digital propaganda thrives in the online ecosystem, where anyone with an internet connection can create and share content (Leetaru, 2019). Memes, viral posts, and user-generated videos have become the new vehicles for propaganda, allowing individuals and groups to bypass traditional gatekeepers and directly influence public opinion (“Social Media Manipulation by Political Actors an Industrial Scale Problem - Oxford Report,” 2021). This democratization of propaganda has made it both more accessible and more chaotic: misinformation shared unintentionally by well-meaning people blurs the line between deliberate propaganda and genuine error (Nature, 2024). For example, during the COVID-19 pandemic, uninformed but well-intentioned health advice spread via social media often drowned out official guidance (Samy & Abdelmalak, 2020). This shift complicates efforts to regulate and combat propaganda, as it is no longer confined to state actors or easily identifiable sources. Instead, it emerges from a vast, decentralized network of users, making it harder to track, control, and counteract (Lee, 2024).
Memes, deepfakes, and bots have emerged as the definitive tools of contemporary propaganda because each allows a manipulated narrative to spread rapidly. Memes, for instance, compress dense socio-political discourse into digestible visual quips, which makes them highly transmissible and accessible but also prone to oversimplifying and distorting the truth. During the 2024 U.S. presidential election, memes mocking candidates such as Kamala Harris and Donald Trump went viral within hours, often playing on stereotypes or spreading false narratives while masquerading as humor (Merica, 2024; Nieubuurt, 2021). While memes can be a means of engaging people, their viral and emotional appeal makes them effective vehicles for propaganda: they bypass critical thinking and appeal directly to cognitive biases. Deepfakes, by contrast, are a more sophisticated and insidious variety of digital propaganda.
These AI-generated videos and audio clips can produce hyper-realistic but completely fabricated content, such as the 2024 deepfake of President Biden urging Democrats not to vote in the New Hampshire primary (Bond, 2024). The consequence of deepfakes is the erosion of trust in visual and auditory media; one can no longer depend on one’s senses to distinguish fact from fiction. This has deep implications for democracy: as trust declines, the credibility of all information is undermined, perpetuating further skepticism and confusion. Finally, there are bots: automated accounts designed to mimic human behavior and interaction in order to magnify propaganda. These accounts can flood social media platforms with repetitive messages, creating the illusion of widespread support or consensus.
During the COVID-19 pandemic, a significant portion of misinformation came from bots; studies suggest that as much as 66% of bot activity was COVID-19-related (Himelein-Wachowiak, 2021). Mass retweeting, liking, and sharing by bots manipulates public opinion and shapes narratives, while users often believe they are engaging with other humans (Himelein-Wachowiak, 2021). The use of these tools raises serious ethical concerns: memes, deepfakes, and bots not only circulate misinformation but also dissolve trust in institutions, media, and even interpersonal relations, potentially destabilizing social cohesion. As these tools become more available and sophisticated, the challenge of distinguishing truth from falsehood intensifies, threatening the very foundation of democratic discourse (Lundberg & Mozelius, 2024).
Social media algorithms and echo chambers have made propaganda more pervasive and personalized, reshaping how information is consumed. Algorithms favor engaging content, which often reinforces users’ existing beliefs and preferences. For example, Facebook and YouTube use engagement-driven systems that curate content to match users’ interests, creating echo chambers where individuals are exposed primarily to like-minded perspectives (Guess & Lyons, 2020). These conditions leave users more vulnerable to propaganda, since they are less likely to encounter opposing ideas or evaluate claims critically (Nature, 2023).
These algorithms also inadvertently amplify falsehoods and fringe content. For instance, YouTube’s recommendation algorithm has been criticized for promoting misleading videos to maximize watch time, irrespective of the truthfulness of the content (Shin, 2024). This not only spreads misinformation but also erodes trust in credible sources, as users find it increasingly difficult to differentiate between fact and fiction (Springer, 2023).
The implications are far-reaching. Algorithmic reinforcement of existing assumptions and restricted exposure to different viewpoints stifle critical thinking and polarize societies (Lee, 2024). In such an atmosphere, propaganda flourishes, and constructive conversation and mutual understanding become almost impossible. Remedies include greater algorithmic transparency, media education, and platform practices that prioritize accuracy and diversity over engagement (Oxford Internet Institute, 2021; Shin, 2024).
Digital propaganda poses a growing threat to democracy and public discourse. Propaganda shapes public opinion and election results by spreading false stories, like those that inundated the 2016 U.S. election with fake news and targeted ads that changed voter behavior (NPR, 2024). Deepfakes complicate this further, because fabricated videos of political leaders lower confidence in electoral processes (Nature, 2024). Moreover, the propagation of false news breeds distrust of institutions and media, as consumers cannot discern whether information comes from reliable sources or has been doctored (Springer, 2023).
Perhaps the most difficult task has been regulating digital propaganda, since any attempt to check misinformation must be weighed against protecting free speech. For instance, Facebook introduced fact-checking labels, yet such policies are often overlooked or attacked as partisan (Lee, 2024). The broader societal implications are deeply troubling: weakened democratic processes, increased polarization, and the rise of populism fueled by distrust in traditional institutions (Samy & Abdelmalak, 2020). These complex issues require balanced responses grounded in transparency, accountability, and public education in order to restore trust and protect democratic values.
Addressing the challenge of digital propaganda requires a multifaceted approach involving individuals, platforms, and governments. Education plays an essential role: promoting digital literacy and critical thinking empowers users to identify and resist misinformation. For instance, Finland’s national media literacy program has successfully equipped citizens with the skills needed to navigate the digital information landscape (Lee, 2024). Social media companies have also acted: Facebook has partnered with fact-checking organizations, and Twitter has used AI to identify and remove bot networks. These efforts remain limited, however, as users often ignore warning labels and malicious actors continuously adapt to avoid detection (Nature, 2024).
Regulatory responses include the EU’s Digital Services Act, which holds platforms accountable for the spread of damaging content while remaining sensitive to free speech concerns. The effectiveness of such laws depends on enforcement standards and coordination with other countries (Oxford Internet Institute, 2021). Ultimately, efforts to curb digital propaganda depend on collective action. Through education, technological innovation, and regulation, society can build a better-informed, more resilient online community capable of resisting the spread of propaganda and defending democratic values.
The internet and social media era has made propaganda more pervasive, more personal, and far more sophisticated. From the democratization of propaganda creation to the rise of tools such as memes, deepfakes, and bots, digital technology has dramatically changed how information is manipulated and distributed (Lundberg & Mozelius, 2024; Nieubuurt, 2021). Algorithms and echo chambers amplify these effects, polarizing societies and eroding trust in institutions (Cinelli, 2021). The impact on democracy and public discourse is undeniable: propaganda undermines election integrity and nurtures broad skepticism of information (Bond, 2024).
Overcoming these challenges demands effort at multiple levels. Education should arm people with digital literacy and critical thinking; platforms must become more transparent and accountable in combating misinformation; and governments should legislate regulations that balance oversight with protection of free speech. Fundamentally, fighting digital propaganda requires common action. People, platforms, and governments acting together can forge a more aware, more robust virtual community that shields the integrity of public discourse and the very roots of democracy.
References
Bond, S. (2024, December 21). How deep fakes and AI memes affected global elections in 2024. NPR. https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
Cinelli, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences. https://www.pnas.org/doi/10.1073/pnas.2023301118
Guess, A. M., & Lyons, B. A. (2020). Misinformation, disinformation, and online propaganda. In Social Media and Democracy. Cambridge University Press. https://www.cambridge.org/core/books/social-media-and-democracy/misinformation-disinformation-and-online-propaganda/D14406A631AA181839ED896916598500
Himelein-Wachowiak, M. (2021, May 20). Bots and misinformation spread on social media: Implications for COVID-19. PubMed Central. https://pmc.ncbi.nlm.nih.gov/articles/PMC8139392/
Lee, C. (2024, August 29). Social media manipulation in the era of AI. RAND. Retrieved January 13, 2025, from https://www.rand.org/pubs/articles/2024/social-media-manipulation-in-the-era-of-ai.html
Leetaru, K. (2019, May 6). Is digital age propaganda fundamentally different from that of past eras? Forbes. https://www.forbes.com/sites/kalevleetaru/2019/05/06/is-digital-age-propaganda-fundamentally-different-from-that-of-past-eras/
Lundberg, E., & Mozelius, P. (2024, August 23). The potential effects of deepfakes on news media and entertainment. Springer Nature. https://link.springer.com/article/10.1007/s00146-024-02072-1#citeas
Merica, D. (2024, July 21). AI is helping shape the 2024 presidential race. But not in the way experts feared. AP News. Retrieved January 10, 2025, from https://apnews.com/article/artificial-intellgence-memes-trump-harris-deepfakes-256282c31fa9316c4059f09036c70fa9
Nieubuurt, J. T. (2021, January 14). Internet memes: Leaflet propaganda of the digital age. Frontiers in Communication, 5. https://www.frontiersin.org/journals/communication/articles/10.3389/fcomm.2020.547065/full
Nyhan, B. (2020, September 22). Exposure to untrustworthy websites in the 2016 U.S. election. PubMed Central. Retrieved January 13, 2025, from https://pmc.ncbi.nlm.nih.gov/articles/PMC7239673/
Samy, M., & Abdelmalak, R. (2020, April 9). Social media as a source of medical information during COVID-19. National Library of Medicine. https://pmc.ncbi.nlm.nih.gov/articles/PMC7482858/
Shin, D. (2024). Artificial misinformation: Exploring human-algorithm interaction online. Palgrave Macmillan. https://link.springer.com/chapter/10.1007/978-3-031-52569-8_2#citeas
Social media manipulation by political actors an industrial scale problem - Oxford report. (2021, January 13). University of Oxford. https://www.ox.ac.uk/news/2021-01-13-social-media-manipulation-political-actors-industrial-scale-problem-oxford-report
Appendix: Survey Results
How often do you encounter content on social media that you believe is propaganda?
- Very often: 12%
- Often: 24%
- Occasionally: 42%
- Rarely: 19%
- Never: 3%

Do you think memes are an effective way to spread propaganda?
- Strongly agree: 26%
- Agree: 38%
- Neutral: 21%
- Disagree: 11%
- Strongly disagree: 4%

Have you ever come across a deepfake or manipulated video?
- Yes, and I recognized it as fake at the time: 18%
- Yes, but I didn’t realize it was fake until later: 14%
- No, I’ve never encountered one: 50%
- I’m not sure: 18%

How do you usually verify the authenticity of information you see online? (Average Ranking)
- Checking multiple sources
- Consulting trusted news outlets
- Looking for fact-checking labels
- Asking friends or family
- I don’t verify information

Do you believe social media algorithms influence the type of propaganda you are exposed to?
- Strongly agree: 41%
- Agree: 36%
- Neutral: 16%
- Disagree: 6%
- Strongly disagree: 1%

In your opinion, how significant is the impact of propaganda on public opinion today?
- Very significant: 33%
- Significant: 45%
- Neutral: 15%
- Not very significant: 5%
- Not significant at all: 2%

Are you aware of any efforts by social media platforms to combat propaganda?
- Yes, and I think they are effective: 9%
- Yes, but I think they are ineffective: 41%
- No, I’m not aware of any efforts: 29%
- I’m not sure: 21%

How confident are you in your ability to identify propaganda when you see it?
- Very confident: 14%
- Confident: 38%
- Neutral: 30%
- Not very confident: 13%
- Not confident at all: 5%

Do you think digital propaganda poses a greater threat to democracy than traditional propaganda?
- Strongly agree: 36%
- Agree: 40%
- Neutral: 17%
- Disagree: 5%
- Strongly disagree: 2%

Have you ever changed your opinion on a topic after encountering propaganda online?
- Yes, multiple times: 15%
- Yes, once or twice: 38%
- No, never: 32%
- I’m not sure: 15%

What role do you think individuals should play in combating the spread of propaganda? (Summary of Responses)
- “Individuals should fact-check before sharing information.”
- “Educate others about misinformation.”
- “Report content that seems suspicious.”

How important do you think education is in helping people recognize and resist propaganda?
- Very important: 47%
- Important: 39%
- Neutral: 11%
- Not very important: 2%
- Not important at all: 1%

What measures do you think would be most effective in regulating or controlling the spread of propaganda online? (Average Ranking)
- Public education campaigns
- Improved content moderation by platforms
- AI detection tools
- Better laws and regulations
- Other (e.g., increased personal accountability)