Defending the integrity of elections is so important that the U.S. Supreme Court has allowed laws that restrict speech, such as in Burson v. Freeman, which upheld limits on campaign speech near polling places. But the Supreme Court has also protected political speech against election-related laws it thought went too far, such as in McIntyre v. Ohio Elections Commission.
The latest election integrity laws drawing First Amendment scrutiny involve the use of deepfakes, which are computer-generated video or audio that is so realistic that viewers may believe a real person said or did something that they did not say or do.
The rise of artificial intelligence, commonly referred to simply as “AI,” to produce deepfakes of political figures and candidates has so concerned state legislatures that many have passed laws making their use around elections illegal. So far, the courts have found these laws constitutionally flawed, usually concluding that they are overly broad in restricting speech or unnecessary to address the problem.
Are deepfakes protected by the First Amendment?
The latest generative AI systems are built on large-scale machine learning models, which produce deepfake images and videos that are very hard to distinguish from those that capture actual people and events.
While new technologies have created the recent fake images causing alarm in politics, the question of outlawing computer-generated images has come up before, namely with computer-generated sexual imagery.
In 1996, Congress passed the Child Pornography Prevention Act (CPPA), which criminalized computer-generated images that appeared to show minors engaging in sexually explicit conduct. The Supreme Court, in Ashcroft v. Free Speech Coalition, struck down key parts of the act as overly broad and as limiting speech protected by the First Amendment. Child pornography is outlawed not on grounds of obscenity, the court reasoned, but because making it requires the exploitation of real children. Computer-generated images, which involve no real children, were therefore found to be protected speech.
However, deepfakes and other digital manipulations that portray real people in invented sexual situations, by far the most common application of the technology, are often illegal for other reasons, including copyright and privacy violations. Most state legislatures have also specifically outlawed these kinds of manipulated sexual portrayals.
States began regulating deepfakes in political campaigns in 2019
Deepfakes in political campaigns first became a legal issue in 2019 because of an anonymous deepfake video portraying California Congresswoman Nancy Pelosi as intoxicated. California and Texas soon passed legislation outlawing the creation and dissemination of deepfakes close to elections.
California’s AB-730 allows such audio or visual media as long as it is labeled as fake or as a parody. Texas made no such distinction in its 2019 statute and went further than California by criminalizing political deepfakes within 30 days of an election as a Class A misdemeanor.
By the end of 2024, 20 states had election deepfake laws, two of them outlawing such material even if it was labeled as not authentic. In 2024, there were fewer than 200 reported cases of political deepfakes and no criminal prosecutions. Still, two of the laws became public issues for both political and legal reasons.
California's 2024 law targeted social media distribution of deepfakes
Inspired in part by a campaign video manipulated with a deepfake of Kamala Harris’s voice, California passed three new laws in 2024 aimed at election interference using deepfakes. The original video was produced by Christopher Kohls and was clearly labeled a parody when posted on X. Elon Musk reposted it without any disclaimers, and it received over 129 million views. Governor Gavin Newsom referenced Musk and his reposting when signing the laws, which proscribed using deepfakes aimed at deceiving voters within 120 days of an election.
The three laws together are called the “Defending Democracy from Deepfake Deception Act of 2024.” AB-2655 requires social media platforms such as X (formerly Twitter) and Facebook to block, or at least label, election-focused deepfakes and to create systems for reporting proscribed content. It allows candidates and elected officials to sue the platforms for legal relief, including damages, if they violate the law. AB-2839 makes the creators and reposters of deceptive election deepfakes legally accountable as well. AB-2355 requires all election-focused deepfakes to clearly disclose that they are artificially manipulated images, voices or both.
Federal judge ruled parts of deepfake law violated free speech rights
Within hours of the laws being signed, Kohls, who considers his work political satire, filed suit in federal court against California Attorney General Rob Bonta and California Secretary of State Shirley N. Weber and their agents asking for injunctive relief from AB-2839.
While Senior U.S. District Judge John Mendez found parts of the law constitutional, such as the requirement for disclaimers before and after audio deepfakes, he concluded that many other provisions violated the First Amendment and enjoined their enforcement.
“Most of it acts as a hammer instead of a scalpel,” Judge Mendez said of AB-2839, adding that it was “a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.”
“Even if AB 2839 were only targeted at knowing falsehoods that cause tangible harm, these falsehoods as well as other false statements are precisely the types of speech protected by the First Amendment. In New York Times v. Sullivan, the Supreme Court held that even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected,” he wrote.
Minnesota faced challenge over law on spreading political deepfakes
Kohls, with Minnesota state legislator Mary Franson, then filed suit to block Minnesota’s deepfake law in a case against Minnesota Attorney General Keith Ellison. Publicity about the case went viral when it was revealed that one of the state’s experts used ChatGPT, a text-generating AI program, to write his submission to the court.
Minnesota’s 2023 law, 609.771 “Use of Deep Fake Technology to Influence an Election,” is similar to the California statutes in many respects, but goes further in allowing for injunctive relief “against any person who is reasonably believed to be about to violate” the law. This kind of prior restraint on free speech has been rejected by the Supreme Court, most famously in the Pentagon Papers case.
During early proceedings in the Minnesota case, the plaintiffs revealed that one of Minnesota’s experts, Stanford University professor Jeff Hancock, founder of the university’s Social Media Lab, had submitted an opinion, under penalty of perjury and at a cost to the people of Minnesota of $600 an hour, that cited at least two articles and one set of authors hallucinated by the ChatGPT program (GPT-4o) he used to help write it.
One of the known issues with generative AI text programs is that they can make things up, so-called hallucinations. Hancock claimed the errors were not important and were the result of using a placeholder “[cite]” for information he intended to add later. The substance of his expensive expert opinion remained valid, he insisted. But the episode caused critics to raise doubts about his arguments overall.
As plaintiff attorney Frank Bednarz remarked, defenders of election deepfake laws claim that extreme limitations on free speech are necessary because “unlike other speech online, AI-generated content supposedly cannot be countered by fact-checks and education.” The example of Hancock’s AI-generated statements to the court proved otherwise, he said. “[B]y calling out the AI-generated fabrication to the court, we demonstrate that the best remedy for false speech remains true speech—not censorship,” Bednarz said.
Elon Musk sues over law to label, restrict deepfakes on social media
Legislation and court interventions around election and other political deepfakes are likely to increase as the technologies improve and concerns about election misinformation remain widespread.
Already, Elon Musk’s X social media company has filed suit challenging California’s new law that requires platforms to block or label deepfakes.
Alexandra Taashman points out in a 2021 Loyola of Los Angeles Law Review article that many deceptive speech acts can already be effectively prosecuted on copyright and other grounds. But a growing political crisis on the internet will likely continue to spur lawmakers to try to limit political speech directly, despite the skepticism of the courts.