“I can’t believe my eyes” has long been an expression of surprise. In the age of deepfakes, it’s time to take it literally.
Deepfakes are manipulated videos or images that mislead viewers by depicting people saying or doing things they never said or did. The rapid expansion of artificial intelligence has fueled a surge of fake videos, prompting public outrage and calls for legislation.
Congress and state legislatures have proposed, and some states have passed, laws that would limit the use of deepfakes and impose penalties for misuse. The unauthorized, manipulated use of people’s likenesses in pornography has been a particular source of concern.
However, despite their misleading nature, deepfakes are protected under the First Amendment as a form of free expression. Deepfakes are essentially lies, and lies, absent criminal behavior, are protected speech. Falsehoods are protected in part because giving the government the authority to determine truth or falsity would largely negate freedom of speech.
The dangers of deepfakes
The dangers of deepfakes include:
- The manipulation of the public into believing something that is not true. This could have dramatic social and political consequences.
- Those political consequences could undermine American society, government and democracy itself.
- Viewers leery of deepfakes and manipulated content are less likely to trust credible media because they no longer know which sources to believe.
- Deepfakes can victimize individuals by creating the impression that they engaged in activities in which they never took part.
The First Amendment does not, however, offer blanket protection to deepfakes. Crimes can still be punished even if they’re committed using free expression.
Remedies against the wrongful use of deepfakes
The use of deepfakes to con someone into providing funds can be prosecuted as fraud. Manipulated videos used to blackmail someone would be evidence of extortion. A pattern of deepfakes targeting an individual may constitute harassment.
In addition to criminal prosecution, there are also civil remedies that can be used to combat wrongful deepfakes.
A libel suit can be filed if a deepfake damages someone’s reputation by portraying him or her in a false light.
If a deepfake creator doesn’t own the images in the video, there’s potential liability for copyright infringement.
There are also state laws regarding privacy and publicity that bar the unauthorized use of someone’s name or likeness.
While AI is relatively new, media law is not, and there are decades of court decisions governing content. The freedom to speak carries no corollary right to wrongfully damage reputations, steal property, invade privacy or otherwise harm others.
Artificial intelligence is not one of those technological advances that society clamored for. (We’re still waiting for those jetpacks.) Instead, there’s a sense that AI is happening to us and our only option is to brace ourselves. There was a similar sense in the mid-’90s as “the information superhighway” rolled out.
In time, though, fears typically give way to an appreciation for the positive things an emerging technology can bring to our lives. Yes, AI is generating a flood of deceitful, manipulative and damaging posts. But in time, it will enhance our ability to share content and communications with others in ever more creative, ambitious and economical ways. Free speech and technology are a potent combination.
Ken Paulson is the director of the Free Speech Center and a dean emeritus at Middle Tennessee State University.
The Free Speech Center newsletter offers a digest of First Amendment- and news media-related news every other week. Subscribe for free here: https://bit.ly/3kG9uiJ