Democracy in Danger: How AI-Enhanced Disinformation is Threatening Democracy

Soon after the start of Russia’s invasion of Ukraine, a video of Ukrainian President Volodymyr Zelenskyy circulated on social media in which he appeared to ask Ukrainian soldiers to surrender. It appeared not only on social media but also on Ukraine’s national television station, Ukraine 24. The entirely fabricated video demonstrates firsthand the threat that deepfakes pose to democracy. While it has not been officially attributed to any actor, Russia is widely suspected of spreading the video as part of its online disinformation strategy. And although this particular deepfake was quickly debunked by Zelenskyy himself and ultimately taken down, it could be just the “tip of the iceberg,” according to Professor Hany Farid of the University of California, Berkeley, an expert in digital media forensics.

Deepfakes, a form of synthetic media in which AI produces highly realistic but fake images and video, have the potential to become extremely effective weapons of disinformation. They are created with an AI technique called “deep learning,” in which software loosely modeled on the human brain, known as a neural network, learns from large amounts of data; the technologies needed to create them are publicly available. Whereas producing a highly realistic fake video once required substantial technical expertise, today all one needs is a few photographs and an internet connection.
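To make the mechanics concrete, the sketch below shows the shared-encoder, two-decoder autoencoder design behind many early face-swap deepfakes. It is purely illustrative: the image size, layer widths, stand-in random data, and tiny training loop are assumptions for readability, not any particular tool’s implementation.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder behind early
# face-swap deepfakes. Illustrative only: sizes, data, and training are placeholders.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (assumed size)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(),
                                 nn.Linear(1024, 256))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(),
                                 nn.Linear(1024, IMG), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

encoder = Encoder()      # shared: learns a generic "face" representation
decoder_a = Decoder()    # learns to reconstruct person A's face
decoder_b = Decoder()    # learns to reconstruct person B's face

# Training (sketch): each decoder rebuilds its own person's photos from the
# shared code. At inference, routing person A's face through decoder_b renders
# A's pose and expression with B's identity -- the face swap.
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder_a.parameters()) +
                       list(decoder_b.parameters()), lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, IMG)   # stand-in for real photos of person A
faces_b = torch.rand(8, IMG)   # stand-in for real photos of person B
for _ in range(3):             # a real model trains for many epochs on thousands of images
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

swapped = decoder_b(encoder(faces_a))   # person A's expression, person B's face
```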

In recent years, global disinformation operations have grown in sophistication and effectiveness. When conducted by authoritarian state actors, they can shape international narratives, target democratic institutions and governments, and amplify authoritarian messaging. The campaign Russia conducted during the 2016 United States presidential election is perhaps the best-known example of the far-reaching effects a well-run disinformation campaign can have on a democracy: fake news not only affected the outcome of the election but permanently altered the political landscape of the United States. Public distrust in media, belief in propaganda and conspiracy theories, and a lack of confidence in government have all risen as a result. China, another authoritarian actor, similarly uses disinformation operations to expand its own spheres of influence while eroding confidence in and reliance on the United States. China’s disinformation efforts range broadly in topic, from denial of the Uyghur genocide in Xinjiang to the origins of COVID-19. Analysts generally agree that China is not yet deploying its full array of resources in what has been termed its “people’s war” on global public opinion. An escalation of China’s disinformation operations against democratic nations such as the United States is therefore not only possible, but likely.

Democracies are particularly vulnerable to disinformation because of the very foundations of a democratic society, above all the freedom of speech and of information. Disinformation operations are nothing new (modern operations began to take shape in the early 1920s and were expertly honed during the Cold War), but artificial intelligence promises to make them exponentially more difficult to counteract, as the rise of deepfakes shows. Disinformation exploits our vulnerabilities as consumers of information: we seek information that provides us with a reassuring sense of identity. Confirmation bias, the tendency to seek and interpret information that confirms existing beliefs, becomes even more insidious during times of heightened uncertainty, such as during conflict.

As deepfake technology improves and fakes become virtually undetectable, it will be increasingly difficult for ordinary citizens to discern whether the content they see is authentic. Synthetic media can lead people to believe in and remember falsified experiences and can even influence decision making. Numerous studies on how deepfakes affect viewers show that, at best, people are left uncertain when presented with deepfakes rather than believing them outright; that uncertainty, in turn, feeds distrust in media. More recent research by Professors Hany Farid and Sophie J. Nightingale shows that AI-generated faces are indistinguishable from authentic imagery and are even judged more trustworthy than real ones. The implications of these findings are sobering: weaponized deepfakes could shape political discourse, obscure or distort reality, and influence the actions of politically engaged citizens.

Democratic governments have done little to invest in adequate defenses against weaponized deepfakes or other advanced information operations, handing their competitors an advantage that will only grow the longer democracies stall. While deepfakes are still in the early stages of becoming a mainstream attack vector, that simply means now is the best time to begin investing in defenses against them. Most efforts to combat deepfakes so far have focused on automated detection. But detection is extremely difficult to scale, and because deepfakes can be mass-produced and mass-distributed, defenses must go further than passive detection.

So far, the United States has been slow to mount an active response to the deepfake threat. Under the Deepfake Report Act, passed by the Senate in 2019, the Department of Homeland Security is required to issue an annual report on the state of digital forgeries, including their use by foreign actors. The Defense Advanced Research Projects Agency (DARPA) is spearheading the government’s technical approach through programs such as Media Forensics (MediFor), which focuses on “leveling the digital imagery playing field” by developing technologies to assess the integrity of an image or video, and Semantic Forensics (SemaFor), which focuses on detection and identification through “semantic errors” such as irregular blinking. Private sector actors such as Google have partnered with DARPA to develop new digital forensics techniques that analyze an individual’s style of speech and body language; the success of such techniques may prove temporary, however, as AI-based generation algorithms continue to improve rapidly.

Defending against deepfakes will require a forward-looking approach. There are significant downsides to developing deepfake technologies for defensive purposes, chief among them the potential for such work to backfire and harm the United States’ legitimacy. However, focusing on benign development for research and defensive purposes, while actively working to mitigate unintended effects, offers a balanced path. Improving our understanding of how deepfakes are created and used will not only cultivate an emerging capability of our own but also allow us to further harden our defenses. Integrating a research-focused development program into a more holistic defense plan will increase the United States’ readiness should weaponized deepfakes be turned against it.
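To illustrate the kind of “semantic error” that SemaFor-style detectors look for, the sketch below flags clips whose blink rate falls far outside the normal human range. It is a hypothetical, simplified illustration rather than DARPA’s actual method: the eye-openness signal, thresholds, and normal-range cutoffs are all assumptions.

```python
# Illustrative blink-rate check, a simplified stand-in for one "semantic" cue.
# Early face-swap models, trained mostly on open-eyed photos, often blinked
# rarely or erratically; real adults blink roughly 15-20 times per minute.
import numpy as np

def blinks_per_minute(eye_openness: np.ndarray, fps: float, closed_thresh: float = 0.2) -> float:
    """Count blinks per minute from a per-frame eye-openness signal (0 = shut, 1 = wide open)."""
    closed = eye_openness < closed_thresh                   # frames where the eye is shut
    onsets = np.count_nonzero(closed[1:] & ~closed[:-1])    # open-to-closed transitions = blinks
    minutes = len(eye_openness) / fps / 60.0
    return onsets / minutes

def looks_suspicious(eye_openness: np.ndarray, fps: float = 30.0) -> bool:
    """Flag a clip for closer forensic review if its blink rate is implausible (cutoffs assumed)."""
    rate = blinks_per_minute(eye_openness, fps)
    return rate < 5.0 or rate > 40.0

# Example: a 60-second clip at 30 fps with almost no blinking gets flagged.
signal = np.ones(1800)        # stand-in eye-openness track; a real one comes from face landmarks
signal[500:505] = 0.1         # a single brief blink in the whole minute
print(looks_suspicious(signal, fps=30.0))   # True
```

A check this simple is easy for newer generators to defeat, which is why the article stresses that passive detection alone cannot carry the defense.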
