Though private businesses and not government entities, U.S. social media platforms have nonetheless been at the center of a number of free speech disputes.
Social media is a method of internet-based communication in which users create communities and share information, videos and personal messages with each other. Some of the most popular social media platforms include Facebook, YouTube, Twitter, Instagram, TikTok and Snapchat.
Typical features of social media include creating accounts; sharing information; following, liking and blocking other users; and customizing what you see.
Participation in social media generally involves agreeing to follow a platform’s rules, which can have consequences for personal free expression.
In 2017, the U.S. Supreme Court called social media “the modern public square” and noted in Packingham v. North Carolina that these “websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.”
Courts have examined:
- The extent of government’s ability to limit speech of government employees or public school students on social media;
- Whether a government official or government entity with a social media account can block, under the public forum doctrine, others from seeing posts or responding to posts on those accounts;
- Whether states can pass laws that regulate or restrict a social media company’s content moderation activities, such as when a social media company chooses to remove individual posts or entire user accounts;
- Whether social media companies can be held liable when their algorithms recommend content by terrorist groups to others; and
- What level of proof is necessary to show that threats published or sent on social media can result in a criminal conviction of stalking.
In addition, some have warned that the use of disguised social media accounts by Russia, China and others to spread disinformation and sow division and confusion among Americans is a new type of “information warfare” that threatens national security and warrants some type of government intervention. Others counter that such intervention could lead to improper government censorship and limit the right of Americans to receive information through social media platforms.
Court strikes down law barring sex offenders from using social media
In one of the first cases involving social media to reach the Supreme Court, the Court invalidated a North Carolina law that prohibited convicted sex offenders from accessing social media websites over concern that they would use the communications medium to cultivate new victims.
The Court explained in Packingham that the North Carolina law was overbroad and would criminalize a wide range of legal speech activities. The decision emphasized the free-speech capabilities of social media, much as the Court had recognized the speech-enhancing capabilities of the internet in Reno v. ACLU (1997).
Public employees, students have been disciplined for social media posts
Social media communications have spurred First Amendment issues in public employee speech cases. Many government employees have faced discipline for Facebook posts about their bosses, co-workers, or students, or for comments related to core functions of their jobs that their employers viewed as inappropriate.
For example, the 6th U.S. Circuit Court of Appeals in December 2022 upheld the termination of a Maryville, Tenn., police officer over Facebook posts that were critical of the county sheriff. The court reasoned that the local police department had an interest in maintaining a good working relationship with the sheriff’s department, and that this interest outweighed the officer’s free-speech rights.
Student speech cases have also arisen out of social media posts.
One of the more pressing questions in First Amendment law is how much power school officials have to regulate students’ off-campus communications on social media sites.
For example, in Bell v. Itawamba County School Board (2015), the 5th U.S. Circuit Court of Appeals determined that public school officials could punish a student for a rap song he created off-campus and posted on Facebook and YouTube. The video referenced two teachers at the school who allegedly had engaged in sexually inappropriate behavior with female students.
However, in Mahanoy Area School District v. B.L. (2021), the U.S. Supreme Court said that a cheerleader’s vulgar Snapchat post after failing to make the varsity squad did not create a substantial disruption at school, and that the student’s free speech rights protected her from school discipline.
Government social media accounts can create a public forum
Government officials routinely use social media to communicate policy, advocate positions, introduce new legislation and engage in other communication.
However, once a government entity or government official creates a forum that allows people to comment on posts, the government may run into First Amendment hurdles if the entity or official tries to shut down or silence opposing viewpoints.
The 2nd U.S. Circuit Court of Appeals ruled in Knight First Amendment Institute v. Trump (2019) that President Donald Trump violated the First Amendment by removing from the “interactive space” of his Twitter account several individuals who were critical of him and his governmental policies.
The appeals court agreed with a lower court that the interactive space associated with Trump’s Twitter account, “@realDonaldTrump,” is a designated public forum and that blocking individuals because of their political expression constitutes viewpoint discrimination.
The U.S. Supreme Court in April 2021 vacated the decision and sent the case back to the 2nd Circuit with instructions to dismiss it as moot because Trump was no longer president. Moreover, after the Jan. 6, 2021, attack on the U.S. Capitol, Twitter had permanently suspended Trump’s account over concern that his comments were being interpreted as encouragement to commit violence, barring him from the platform.
In another case, the 4th U.S. Circuit Court of Appeals in Davison v. Randall (2019) found that a Virginia county official created a public forum with her Facebook page. Phyllis Randall, chair of the Loudoun County Board of Supervisors, had removed one of her constituents, Brian Davison, from her Facebook page.
The court ruled that the page was a public forum and that blocking Davison, who had posted comments about corruption on the official’s page, “amounted to viewpoint discrimination” in violation of the First Amendment.
Circuit courts split over states regulating social media content
An emerging issue related to social media is how far companies can go in removing content and users. Some Republicans have complained that posts from conservative leaders and journalists are being blocked or removed, or their accounts suspended, because of their posts, similar to what happened to Trump.
Two states, Florida and Texas, passed similar, though not identical, laws to reduce such blocking, saying the social media companies were discriminating by not allowing certain people to use their platforms based on their political views.
Trade associations for the social media companies sued the states over the laws, arguing that content moderation, including removing posts or users, is a form of editorial judgment protected by the First Amendment as the companies’ own speech. Just as the government cannot force a newspaper to publish something, they argued, it cannot force social media companies to allow certain content on their platforms.
In 2022, two U.S. Circuit Courts reached different conclusions related to a state’s ability to pass laws regulating a social media company’s content-moderation activities.
In NetChoice v. Attorney General of Florida, the 11th U.S. Circuit Court of Appeals upheld an injunction preventing the Florida law from going into effect, saying the “Stop Social Media Censorship Act” would likely be found to violate the First Amendment. The Florida law sought to prohibit social media companies from “deplatforming” political candidates, from prioritizing or deprioritizing any post or message by or about a candidate and from removing anything posted by a “journalistic enterprise” based on content.
The 11th Circuit held that social media companies are private enterprises that have a First Amendment right to moderate and curate the content they disseminate on their platforms.
A few months later, the 5th U.S. Circuit Court of Appeals took the opposite view of the similar Texas law and vacated a preliminary injunction from a federal district court that had prevented it from being enforced.
In NetChoice v. Paxton, the court supported Texas’s view that social media companies function as “common carriers,” like a phone company, and as such, they can be regulated with anti-discrimination laws. The court said it rejected “the idea that corporations have a freewheeling First Amendment right to censor what people say” on their platforms.
The Texas law “does not regulate the (p)latforms’ speech at all; it protects other people’s speech and regulates the (p)latform’s conduct,” the court said.
Both laws are on hold while the cases are appealed to the U.S. Supreme Court, which has not yet granted or denied review.
Can social media companies be liable for recommending terrorist content?
In another set of cases, the Supreme Court is examining:
- Whether Section 230 of the Communications Decency Act, which shields internet service providers from legal liability for user-created content, also shields them when their algorithms promote or recommend content to others; and
- Whether a social media company provides substantial assistance to terrorists by allowing them to operate on its platform, and whether it can be held liable for aiding and abetting their attacks by not acting more aggressively to remove their content.
The Supreme Court heard arguments in February 2023 in Gonzalez v. Google, which involves terrorist videos on YouTube (owned by Google), and in Twitter v. Taamneh. In both cases, families of Americans killed in ISIS attacks argue that, by allowing ISIS to post videos and other content to spread its messages and radicalize new recruits, the social media platforms aided and abetted the terrorist attacks that killed their relatives, entitling the families to seek damages under the Anti-Terrorism Act.
The 9th U.S. Circuit Court of Appeals sided with social media in one case, but against the companies in the other.
In the Google case, the family of a 23-year-old American student killed in an ISIS attack in Paris in 2015 says that YouTube aided the terrorists by recommending ISIS videos to users through its algorithms, helping ISIS recruit new members.
In the Twitter case, the U.S. family of a Jordanian citizen killed in an ISIS attack in Istanbul in 2017 argues that Twitter and other tech companies knew that their platforms played a role in ISIS terrorism efforts but failed to keep ISIS content off their platforms.
Some free speech advocates have argued in amicus briefs that the recommendation algorithms are crucial to free speech, and without them, it would become virtually impossible to search the internet and the social media platforms would lose much of their value as forums for speech.
The Supreme Court is expected to issue its decisions in these cases later in 2023.
Courts have examined online stalking, ‘true threats’ on social media
Online stalking also has been at the center of First Amendment cases in which courts have had to decide whether repeated unwanted and threatening communications to a person over social media are “true threats” and unprotected by the First Amendment.
In legal parlance, a true threat is a statement that is meant to frighten or intimidate one or more specified persons into believing that they will be seriously harmed by the speaker or by someone acting at the speaker’s behest.
Courts have divided over the standard of proof of intent in cases that involved threats over social media. Is it sufficient to show that a reasonable person would consider a person’s words a threat, or must it be proven that the person sending the messages knew or intended his words to be threatening?
In Elonis v. United States (2015), the U.S. Supreme Court reversed a criminal conviction under a federal stalking statute because the jury was instructed that it only had to find that a reasonable person would view the speech as threatening, without considering the mental state of the speaker. The Court did not, however, state what standard of proof was necessary to determine the speaker’s intent. Must the standard be objective, based on how a reasonable person would view the words in their facts and context? Or subjective, requiring proof of the speaker’s own understanding of the effect of the messages when sending them?
In April 2023, the U.S. Supreme Court heard oral arguments in an appeal of a Colorado Court of Appeals decision that upheld the stalking conviction of a man who for two years had sent threatening messages over Facebook to a local musician. The man, Billy Raymond Counterman, contends that his messages were not explicitly threatening and that his conviction should be overturned because the state’s standard of proof for intent is too low.
This article was published in April 2023 and has been updated periodically since then. Deborah Fisher is the director of the Seigenthaler Chair of Excellence in First Amendment Studies at Middle Tennessee State University. Parts of this article were contributed by David L. Hudson Jr., a law professor at Belmont University who publishes widely on First Amendment topics.