Policy Options for Fighting Deepfakes

By: Conrad Stosz, Guest Columnist

Advances in machine learning are making it easy to create fake videos—popularly known as “deepfakes”—in which people appear to say and do things they never did. For example, a faked video of Barack Obama went viral in April 2018 in which he appears to warn viewers about misinformation.[i] Falsehoods already spread farther than the truth, and deepfakes are making it cheap and easy for anyone to create fake videos.[ii] When convincing fakes become commonplace, the public will also start to distrust real video evidence, especially when it does not match their biases.[iii] Unfortunately, the technology that enables deepfakes is advancing rapidly.[iv] Deepfakes will become easier to create, and humans will increasingly struggle to distinguish fake videos from real ones. Luckily, there is some hope that algorithms may be able to detect deepfakes automatically. Computer scientists have generally struggled to automate fact checking.[v] However, early research suggests that fake videos may be an exception.[vi]

Policymakers should support this research and encourage social-media sites to put it into practice. Algorithms will never completely solve the problem of misinformation, but they can raise the cost and expertise required to spread convincing fake videos. Policymakers must act or else risk democracy losing its grip on objective truth once and for all.

Broadly, the term “deepfake” refers to audio, imagery, or video that is generated or altered using machine learning.[vii] Machine learning is an area of computer science where algorithms learn patterns from data. For example, a machine-learning algorithm may teach itself to recognize faces by looking at thousands of images of faces, rather than by being explicitly programmed with an understanding of eyes or mouths. Recently, an area of machine learning called “deep learning” has revolutionized video processing by loosely mimicking the way brains process visual information.[viii] In addition to interpreting video data, these algorithms can also generate or edit it.[ix] Computer scientists are starting to automate video-editing tasks that used to take a team of experienced CGI artists.[x]
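
To make “learning patterns from data” concrete, the minimal sketch below trains a tiny convolutional network on labeled examples. It is an illustration, not a real face recognizer: the random tensors stand in for a labeled image dataset, and the toy two-class setup is an assumption made for brevity. Nothing in the code describes eyes or mouths; the network only adjusts its internal weights to reduce mistakes on the examples it sees.

```python
# A minimal sketch of "learning patterns from data" (PyTorch). The random
# tensors below are placeholders for a real labeled face dataset.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # learns low-level visual filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),  # two output classes, e.g. "person A" vs. "person B"
)

images = torch.randn(64, 3, 32, 32)   # placeholder for 64 labeled 32x32 color images
labels = torch.randint(0, 2, (64,))   # placeholder for human-provided labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the current guesses?
    loss.backward()                        # compute how each weight contributed
    optimizer.step()                       # nudge weights toward fewer mistakes
```

Systems that recognize faces or synthesize video follow this same loop, just with vastly larger networks and datasets.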

Deepfakes are dangerous because anybody can make them and do so anonymously. Programmers have created easy-to-use software that allows people with no expertise to create convincing fakes.[xi] Once this software is released, it is almost impossible to stop its spread. For decades, malicious software has proliferated around the world, and governments have failed to contain it.[xii] Additionally, deepfake algorithms can run on home computers, and there is no feasible way to detect when someone is creating a fake video.[xiii]

Deepfakes are also difficult to attribute and deter. By lowering the barrier to entry, deepfake algorithms will lead to more misinformation. Any internet troll or political partisan will be able to easily create a fake video of someone they dislike. In a flood of compelling fakes, observers will struggle to identify or attribute influence operations by foreign governments. Additionally, deepfakes carry little forensically useful information, especially when they are generated by public algorithms using public data. Foreign governments will be able to easily hide deepfakes in plain sight and deflect blame to domestic actors and criminal proxies.

The best way to combat deepfakes is to stop them from spreading. Policymakers should help media sources block deepfakes by supporting research into automatically detecting them. Online, most people get their information from news outlets[xiv] and mainstream social-media sites.[xv] Each of these sites controls what information it publishes. If they can detect deepfakes, the sites can block them and stop them from spreading. Research into algorithmically detecting deepfakes has already started. For example, DARPA recently started its Media Forensics program to help identify doctored video.[xvi] Unfortunately, deepfakes are becoming more realistic and better at avoiding detection.[xvii] Fighting deepfakes will be a constant game of cat and mouse, and fake-detection research will need to be continuous.[xviii] Experts do not yet know whether it will be possible to reliably spot deepfakes in the long term.[xix] However, better algorithms can still raise the cost of evading detection and keep convincing, undetectable fakes out of the hands of the general public.
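
As a rough illustration of what platform-side screening could look like, the sketch below scores sampled frames of a video with a trained detector and flags the video when the frames collectively look synthetic. The `detector` callable, the sampling stride, and the threshold are hypothetical placeholders; real forensic systems, including those studied under DARPA’s Media Forensics program, are far more sophisticated and also examine audio, metadata, and physical consistency.

```python
# Rough shape of an automated deepfake screen: score sampled frames with a
# trained per-frame detector, then flag the video if the frames collectively
# look synthetic. `detector` is a hypothetical placeholder for a trained
# model that maps an image to a probability that the image is fake.
import cv2           # OpenCV, used here only to read video frames
import numpy as np

def frame_scores(video_path, detector, stride=30):
    """Yield a "probability fake" score for every `stride`-th frame."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % stride == 0:
            yield detector(frame)
        index += 1
    capture.release()

def screen_video(video_path, detector, threshold=0.7):
    """Flag a video when the median per-frame score crosses a threshold."""
    scores = list(frame_scores(video_path, detector))
    return bool(scores) and float(np.median(scores)) > threshold

# Example use, given some trained model wrapped as `my_detector`:
#     if screen_video("upload.mp4", my_detector):
#         hold_for_human_review()  # hypothetical platform moderation hook
```

Aggregating over many frames makes such a screen harder to fool than judging any single frame, but, as the sources above caution, a determined forger can train against any fixed detector—which is why detection research must be continuous.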

Policymakers should also provide greater incentives for social-media sites to forbid fake political content. Legislating on technology and political speech is fraught with difficulty and civil-liberties concerns. Still, politicians and other public figures can help create a public expectation that social-media platforms block fake political content. This approach has its limits, but it is not useless. In the wake of foreign interference in the 2016 U.S. presidential election, congressional hearings and public outcry have prompted positive changes. Since late 2016, Facebook has prioritized news from trusted sources,[xx] blocked revenue for pages that spread fake news,[xxi] and increased the number of humans involved in moderating content.[xxii] Policymakers should pressure platforms to continue these efforts and to devote more resources to blocking fake videos.

Finally, policymakers should support efforts to make it easy to use the state of the art in deepfake-detection research. They can do this by funding open-source software projects that would make this research available and transparent to everyone. Companies like Facebook and Twitter can afford to hire teams of researchers devoted to keeping their platforms up to date with the latest technology. Independent journalists and smaller websites cannot. Software that anybody can use to scrutinize videos would be a public good. Policymakers should support it with funding and ensure that the tools remain oriented towards the public interest.

Misinformation is a threat to democracy. Deepfakes threaten to erode trust in some of the last sources of objective truth. Policymakers must make high-tech information operations less effective. If they do not, then bad actors, foreign and domestic, will eliminate objective truth and manipulate the public to their own ends. New technology requires new and innovative forms of policy. Policymakers cannot solve this problem by policing speech or criminalizing the math behind deepfake algorithms. Instead, they should fund positive technological trends while also encouraging technology companies to take greater responsibility for their content.

Bibliography

[i] BuzzFeedVideo. “You Won’t Believe What Obama Says In This Video!” YouTube. April 17, 2018. Accessed January 04, 2019. https://www.youtube.com/watch?v=cQ54GDm1eL0.

[ii] Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science 359, no. 6380 (2018): 1146–1151. doi:10.1126/science.aap9559.

[iii] Chesney, Robert, and Danielle Keats Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review 107 (2019, forthcoming). https://ssrn.com/abstract=3213954.

[iv] Coldewey, Devin. “Forget DeepFakes, Deep Video Portraits Are Way Better (and Worse).” TechCrunch. June 05, 2018. Accessed January 04, 2019. https://techcrunch.com/2018/06/04/forget-deepfakes-deep-video-portraits-are-way-better-and-worse/.

[v] Hassan, Naeemul, Bill Adair, James T. Hamilton, Chengkai Li, Mark Tremayne, Jun Yang, and Cong Yu. “The Quest to Automate Fact-Checking.” 2015.

[vi] Melendez, Steven. “Can New Forensic Tech Win War On AI-Generated Fake Images?” Fast Company. April 03, 2018. Accessed January 04, 2019. https://www.fastcompany.com/40551971/can-new-forensic-tech-win-war-on-ai-generated-fake-images.

[vii] Vincent, James. “Why We Need a Better Definition of ‘Deepfake’.” The Verge. May 22, 2018. Accessed January 04, 2019. https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news.

[viii] Parloff, Roger. “Why Deep Learning Is Suddenly Changing Your Life.” Fortune. Accessed January 04, 2019. http://fortune.com/ai-artificial-intelligence-deep-machine-learning/.

[ix] Rejcek, Peter. “The New AI Tech Turning Heads in Video Manipulation.” Singularity Hub. September 04, 2018. Accessed January 04, 2019. https://singularityhub.com/2018/09/03/the-new-ai-tech-turning-heads-in-video-manipulation-2/.

[x] Oberoi, Gaurav. “Exploring DeepFakes – Hacker Noon.” Hacker Noon. March 05, 2018. Accessed January 04, 2019. https://hackernoon.com/exploring-deepfakes-20c9947c22d9.

[xi] Cole, Samantha. “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now.” Motherboard. January 24, 2018. Accessed January 04, 2019. https://motherboard.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley.

[xii] Herr, Trey. “Malware Counter-Proliferation and the Wassenaar Arrangement.” In 2016 8th International Conference on Cyber Conflict (CyCon), 175–90. 2016. https://doi.org/10.1109/CYCON.2016.7529434.

[xiii] Oberoi.

[xiv] Shearer, Elisa. “Social Media Outpaces Print Newspapers in the U.S. As a News Source.” Pew Research Center. December 10, 2018. Accessed January 04, 2019. http://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/.

[xv] Matsa, Katerina Eva, and Elisa Shearer. “News Use Across Social Media Platforms 2018.” Pew Research Center. September 10, 2018. Accessed January 04, 2019.

[xvi] “Media Forensics (MediFor).” Defense Advanced Research Projects Agency. Accessed January 14, 2019. https://www.darpa.mil/program/media-forensics.

[xvii] All Things Considered. “Tracking Down Fake Videos.” National Public Radio. September 25, 2018. Accessed January 04, 2019. https://www.npr.org/2018/09/25/651560073/tracking-down-fake-videos.

[xviii] Waddell, Kaveh. “The Impending War Over Deepfakes.” Axios. July 22, 2018. Accessed January 04, 2019. https://www.axios.com/the-impending-war-over-deepfakes-b3427757-2ed7-4fbc-9edb-45e461eb87ba.html.

[xix] Cole, Samantha. “There Is No Tech Solution to Deepfakes.” Motherboard. August 14, 2018. Accessed January 04, 2019. https://motherboard.vice.com/en_us/article/594qx5/there-is-no-tech-solution-to-deepfakes.

[xx] Mosseri, Adam. “Helping Ensure News on Facebook is From Trusted Sources.” Facebook Newsroom. January 19, 2018. Accessed January 04, 2019. https://newsroom.fb.com/news/2018/01/trusted-sources.

[xxi] Shukla, Satwick, and Tessa Lyons. “Blocking Ads From Pages That Repeatedly Share False News.” Facebook Newsroom. August 28, 2017. Accessed January 04, 2019. https://newsroom.fb.com/news/2017/08/blocking-ads-from-pages-that-repeatedly-share-false-news/.

[xxii] Fowler, Geoffrey A. “I Fell for Facebook Fake News. Here’s Why Millions of You Did, Too.” The Washington Post. October 18, 2018. Accessed January 04, 2019. https://www.washingtonpost.com/technology/2018/10/18/i-fell-facebook-fake-news-heres-why-millions-you-did-too/.
