

Several First Amendment cases have arisen over the use of social media. Courts have examined how far the government can go in regulating speech on social media without violating the First Amendment, as well as whether social media companies can be held liable for spreading terrorist content.

Though they are private businesses and not government entities, U.S. social media platforms have nonetheless been at the center of a number of free speech disputes.

Social media is a method of internet-based communication in which users create communities and share information, videos and personal messages with each other. Some of the most popular social media platforms are Facebook, YouTube, X (formerly called Twitter), Instagram, TikTok and Snapchat.

Users of social media can create accounts, share information, and follow, like and block other users. The social media companies that control the platforms use algorithms to customize what each user sees.

A user has to agree to a platform’s rules, which often allow the social media company to remove or block accounts. 

First Supreme Court speech case involving social media came in 2017

The first free speech case involving social media reached the Supreme Court in 2017, when the court struck down a state law prohibiting convicted sex offenders from using such platforms. Noting the power of social media, the court called it “the modern public square” and said these “websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.”

Since then, the Supreme Court has examined, or is in the process of examining:

  • When a government official or government entity with a social media account can, under the public forum doctrine, block others from seeing or responding to posts on those accounts;
  • Whether states can pass laws that regulate or restrict a social media company’s content moderation activities, such as when a social media company chooses to remove individual posts or entire user accounts;
  • Whether the federal government is violating free speech rights when it works with social media companies to remove posts that it considers disinformation or a national security threat;
  • Whether social media companies can be held liable when their algorithms recommend content by terrorist groups to others; and
  • What level of proof is necessary before threats published or sent on social media can result in a criminal conviction for stalking.

Some have warned that the use of disguised social media accounts by Russia, China and others to spread disinformation and sow division and confusion among Americans is a new type of “information warfare” that threatens national security and warrants some type of government intervention. Others counter that such intervention could lead to improper government censorship and limit the right of Americans to receive information through social media platforms.

Other cases have arisen in lower courts over the extent of the government's ability to limit the speech of government employees or public school students on social media.

Court strikes down law barring sex offenders from social media

In its first social media case, decided in 2017, the Supreme Court considered a North Carolina law intended to restrict sex offenders from cultivating new victims on social media platforms.

Authorities had charged Lester Packingham under the law after he posted a message on Facebook thanking God after the dismissal of a traffic ticket. The problem for Packingham was that he was a registered sex offender, having been convicted of taking indecent liberties with a 13-year-old when he was a 21-year-old college student.

The Supreme Court overturned the lower courts that had upheld the charges against Packingham. It explained in Packingham v. North Carolina (2017) that the law was overbroad and would criminalize a wide range of legal speech activities.

Can government officials block users, delete comments?

Government officials routinely use social media to communicate policy, advocate positions, introduce new legislation and engage in other forms of communication.

However, once a government entity or government official creates a forum that allows people to comment on posts, the government may run into First Amendment hurdles if the entity or official tries to shut down or silence opposing viewpoints.

In 2024, the Supreme Court looked more closely at when a government official violates free speech rights by deleting users’ comments or blocking users. In Lindke v. Freed, the court established a new test to determine when such an official is engaging in state action rather than private action. The court explained that a government official engages in state action on social media if he or she (1) had “actual authority to speak on behalf of the State on a particular matter,” and (2) “purported to exercise that authority in the relevant posts.”

Justice Amy Coney Barrett, in explaining the test, said that “the line between private conduct and state action is difficult to draw.” But she noted that “the distinction between private conduct and state action turns on substance, not labels.”

A few years earlier, in 2021, another case had reached the Supreme Court, this one involving President Donald Trump and his Twitter account. In Knight First Amendment Institute v. Trump (2019), the 2nd U.S. Circuit Court of Appeals had ruled that Trump violated the First Amendment by removing from the “interactive space” of his Twitter account several individuals who were critical of him and his governmental policies.

The appeals court agreed with a lower court that the interactive space associated with Trump’s Twitter account, “@realDonaldTrump,” was a designated public forum and that blocking individuals because of their political expression constituted viewpoint discrimination.

The Supreme Court in April 2021 vacated the decision and sent the case back to the 2nd Circuit with instructions to dismiss it as moot because Trump was no longer president. In addition, after the Jan. 6, 2021, attack on the U.S. Capitol, Twitter had permanently barred Trump from the platform over concern that his comments were being interpreted as encouragement to commit violence.

Supreme Court remands social media regulation cases to lower courts

An emerging issue is how far social media companies themselves can go in removing content and users. Some Republicans have complained that posts from conservative leaders and journalists have been blocked or removed, or their accounts suspended because of their posts, similar to what happened to Trump.

Florida Gov. Ron DeSantis speaks at Miami’s Freedom Tower, on Monday, May 9, 2022. A Florida law intended to punish social media platforms like Facebook and Twitter for blocking or removing conservative viewpoints is an unconstitutional violation of the First Amendment, a federal appeals court ruled Monday, May 23, 2022, dealing a major victory to companies who had been accused by DeSantis of discriminating against conservative thought. (AP Photo/Marta Lavandier, File)

Two states, Florida and Texas, passed similar, though not identical, laws to reduce such blocking, saying the social media companies were discriminating by not allowing certain people to use their platforms based on their political views.

Trade associations for the social media companies argued that content moderation, which includes removing posts or users, is a form of editorial judgment protected by the First Amendment as the companies’ own speech. Just as the government can’t force a newspaper to publish something, they argued, it can’t force social media companies to allow certain content on their platforms.

The Supreme Court took the cases after two U.S. circuit courts of appeals reached different conclusions in 2022 about states’ ability to pass laws regulating social media companies’ content-moderation activities.

In NetChoice v. Attorney General of Florida, the 11th U.S. Circuit Court of Appeals upheld an injunction preventing the Florida law from going into effect, saying the “Stop Social Media Censorship Act” would likely be found to violate the First Amendment. The Florida law sought to prohibit social media companies from “deplatforming” political candidates, from prioritizing or deprioritizing any post or message by or about a candidate, and from removing anything posted by a “journalistic enterprise” based on content.

The 11th Circuit held that social media companies are private enterprises that have a First Amendment right to moderate and curate the content they disseminate on their platforms.

A few months later, the 5th U.S. Circuit Court of Appeals took the opposite view of the similar Texas law, vacating a district court’s preliminary injunction that had prevented the law from being enforced.

In NetChoice v. Paxton, the court supported Texas’s view that social media companies function as “common carriers,” like a phone company, and as such, they can be regulated with anti-discrimination laws. The court said it rejected “the idea that corporations have a freewheeling First Amendment right to censor what people say” on their platforms.

The Supreme Court vacated and remanded the judgments in Moody v. NetChoice (2024), a consolidation of the two cases. Writing for the court, Justice Elena Kagan emphasized that to succeed on a facial challenge, NetChoice had to show that a substantial number of the laws’ applications are unconstitutional. She relied on a key precedent, Miami Herald Publishing Co. v. Tornillo (1974), in which the Supreme Court invalidated a law that forced a newspaper to give a political candidate a right to reply when the newspaper attacked or criticized the candidate. The court said the law violated the newspaper’s right of editorial discretion.

Court considers federal coercion to remove social media posts

Another case regarding government control of social media sites reached the Supreme Court in 2024. It stemmed from lawsuits by the states of Missouri and Louisiana alleging that federal officials and agencies coerced social media companies into removing certain posts because they believed the posts were disinformation or could harm national security.

The court in Murthy v. Missouri considered whether the actions by officials in the Biden administration were enough that the removal of posts could be considered state action. Stated another way, the case concerned whether the social media companies, because of the “encouragement” and alleged coercion by federal government agencies, engaged in state action sufficient to trigger constitutional claims. The court ultimately did not reach that question, ruling in June 2024 that the plaintiffs lacked standing to sue.

Are social media companies liable for promoting terrorist content?

In another set of cases decided in 2023, the Supreme Court examined:

  • Whether Section 230 of the Communications Decency Act, which shields internet service providers from legal liability for user-created content, also shields them when their algorithms promote or recommend content to others; and
  • Whether a social media company provides substantial assistance to terrorists by allowing them to operate on its platform and can be held liable for aiding and abetting their attacks by not taking more aggressive action to remove their content.

The court declined to rule in Gonzalez v. Google (2023) on whether targeted recommendations by a social media company’s algorithms would fall outside the liability shield of Section 230 of the Communications Decency Act.

Instead, the court said that its ruling the same day in Twitter v. Taamneh “is sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail.” In Twitter, the court found that social media companies’ hosting of terrorist content, their failure to remove it and their algorithms’ recommendations of it were not enough to show that they aided and abetted the terrorists in an attack in Istanbul that killed 39 people.

In both cases, families of Americans killed in ISIS attacks argued that by allowing ISIS to post videos and other content to communicate the terrorists’ messages and to radicalize new recruits, the social media platforms aided and abetted terrorist attacks that killed their relatives, entitling the families to seek damages under the Anti-Terrorism Act.

Courts have examined online stalking, ‘true threats’ on social media

Online stalking also has been at the center of First Amendment cases in which courts have had to decide whether repeated unwanted and threatening communications to a person over social media are “true threats” and unprotected by the First Amendment.

In legal parlance, a true threat is a statement that is meant to frighten or intimidate one or more specified persons into believing that they will be seriously harmed by the speaker or by someone acting at the speaker’s behest.

In Elonis v. United States (2015), a case involving posts on Facebook, the U.S. Supreme Court reversed a criminal conviction under a federal stalking statute because the jury had been instructed that it needed to find only that a reasonable person would view the speech as threatening, without considering the mental state of the speaker. The court did not, however, state what standard of proof was necessary to determine the speaker’s intent. Must it be objective, asking how a reasonable person would view the facts and context? Or must it be subjective, requiring proof of the speaker’s own understanding of the effect of the messages when sending them?

The court provided additional guidance on what constitutes a “true threat” in a different stalking case involving a man who made posts about a female musician on Facebook. In Counterman v. Colorado (2023), the U.S. Supreme Court vacated the man’s stalking conviction and sent the case back to the lower court for reconsideration. The court ruled that the First Amendment requires prosecutors to show that the speaker had some subjective awareness of the threatening nature of his communications, and that a showing of recklessness is sufficient.

New laws, regulations restrict use of TikTok

In 2023 and 2024, states and the federal government began banning access to TikTok on government devices. Some states have also passed legislation that would effectively bar TikTok use in their states. The bans arise from concerns that a Chinese company owns the popular video platform and that China’s Communist government will harvest data on Americans and use the platform against American interests. The concerns center largely on privacy and national security.

Courts have upheld the government’s right to bar the use of TikTok on government devices and even on public university Wi-Fi networks. However, questions have arisen about how far regulations can go in barring TikTok use more broadly.

For example, Montana became the first U.S. state to attempt to ban TikTok outright in a law passed in May 2023. A federal judge temporarily blocked the law from taking effect after TikTok filed a lawsuit arguing that the ban was an unconstitutional prior restraint on speech under the First Amendment. The state has said it plans to appeal.

Social media posts have gotten public employees, students in trouble

Former Mississippi high school rapper Taylor Bell is flanked by his attorneys, Wilbur Colom, left, and Scott Colom, right, after the May 12 oral argument in his First Amendment case before federal appeals judges in New Orleans. Bell was punished for a rap song he created and posted on social media off-campus. (Photo by Frank LoMonte, republished with permission)

Many government employees have faced discipline for Facebook posts about their bosses or co-workers, or for comments related to core functions of their jobs that their employers viewed as inappropriate. Courts have reached different conclusions, based on the circumstances, on when such discipline violates the free speech rights of public employees.

For example, the 6th U.S. Circuit Court of Appeals in December 2022 upheld the termination of a Maryville, Tenn., police officer over Facebook posts that were critical of the county sheriff. The court reasoned that the local police department had an interest in maintaining a good working relationship with the sheriff’s department and that this interest outweighed the officer’s free speech rights.

Student speech cases have also arisen from social media posts, including posts made off campus.

For example, in Bell v. Itawamba County School Board (2015), the 5th U.S. Circuit Court of Appeals determined that public school officials could punish a student for a rap song he created off campus and posted on Facebook and YouTube. The video referenced two teachers at the school who allegedly had engaged in sexually inappropriate behavior with female students.

However, in Mahanoy Area School District v. B.L. (2021), the U.S. Supreme Court said that a cheerleader’s vulgar Snapchat post after she failed to make the varsity squad did not create the substantial disruption required under the test used in student speech cases, and that her free speech rights protected her from school discipline.

This article was published in April 2023 and was updated on Feb. 16, 2024. Deborah Fisher is the director of the Seigenthaler Chair of Excellence in First Amendment Studies at Middle Tennessee State University. Parts of this article were contributed by David L. Hudson, Jr., a law professor at Belmont University who publishes widely on First Amendment topics.
