Megan Kauffman, Associate Member, University of Cincinnati Law Review
Deep fakes are computer-generated videos created to make it appear that a person said or did something that he or she never did. The latest technological advances allow a creator to impose a figure’s face onto a previously recorded video (face-swapping) or to control a person’s facial expressions and make the person say whatever the creator wants. Deep fakes first made mainstream news after internet users began using the programs to import celebrity faces into pornographic videos on Reddit. After being banned from the website, the creators released an app called FakeApp, which allows people to easily create deep fakes themselves. In May 2018, a Belgian political party created a video that appeared to show Donald Trump making comments about climate change directed toward Belgian citizens. After receiving hundreds of comments trashing the President of the United States, the political party was forced to admit that it had created the digital replication and released it as a way to get people to sign its petition. But imagine if the creator had been a true enemy of the United States or a political opponent looking to cause backlash across the country. As of today, the technology used in deep fake creations is not advanced enough to pass through digital screenings, but the creations can appear real enough to a human viewer. These screening tools allow websites to identify and remove any deep fakes that have been posted on their sites. At what point should the legislature step in to ensure that the country’s national security is protected against computer-generated fake videos of politicians?
Currently, no federal or state legislation prohibits the creation of deep fakes. Although several bills have been proposed, none has been enacted to safeguard against this type of technology. Legislators face a couple of challenges in banning deep fakes, including the First Amendment’s protection of free speech and the “fair use” doctrine in copyright law. In cases of pornographic deep fakes, courts have allowed individuals to sue under tort claims, including false light and defamation, and under copyright claims. Courts have recognized, however, the need to balance a plaintiff’s rights to privacy and ownership against a defendant’s right to creation under the First Amendment. It will be interesting to see whether the courts maintain this balance when the deep fakes involve politics instead of pornography.
There is a discernible national security interest in restricting or banning deep fakes with regard to politics. A foreign or political adversary could use computer-generated videos posted on the internet to cause hysteria or dissent. Although technological screenings can determine that the videos are fake, the public might not be able to easily recognize them as forgeries, especially in a landscape where fake news, pictures, and videos are already prevalent and readily believed. However, the constitutional bar for restricting free speech is incredibly high. The government would have to show a compelling governmental interest for any legislation restricting the creation of political deep fakes and would have to narrowly tailor that legislation to ensure that citizens’ freedom of speech is not infringed. It is possible that deep fakes could one day become so technologically advanced that they would go undetected by computer algorithms. If websites are not able to distinguish real videos from fakes, they will not be able to prevent fake videos from spreading to the masses. The release of deep fakes so seamlessly created that they appear real could pose a true threat to the security of the country. Hopefully, by then, legislation will have also advanced to combat this problem.
John Villasenor, Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth, Brookings (Feb. 14, 2019), available at https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/.
Oscar Schwartz, You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die, The Guardian (Nov. 12, 2018), available at https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth.
Villasenor, supra note 1.
See Brandenburg v. Ohio, 395 U.S. 444, 447 (1969).