In Twitter, Inc. v. Taamneh (2023), the Supreme Court ruled that social media companies did not “aid and abet” an ISIS terrorist attack simply because their algorithms recommended ISIS content or they failed to remove content that recruited members and spread terrorist messages.
The decision overturned a ruling by the 9th U.S. Circuit Court of Appeals, which had held that relatives of a man killed in the Reina nightclub attack in Istanbul on Jan. 1, 2017, had shown that the companies’ actions (and their inaction in removing content) met the test that allowed the relatives to seek damages under the Anti-Terrorism Act. The law allows secondary civil liability for those who aid and abet “acts of international terrorism.”
The Court ruled the same day in a similar case, Gonzalez v. Google, in which it was reviewing whether Section 230 of the Communications Decency Act — which immunizes internet service providers from liability for content on their platforms — would also immunize them when their algorithms recommended content. In that case, the Court said it need not resolve the issue because, under the Twitter ruling, most of the plaintiffs’ arguments about liability for aiding terrorists would fail.
Thomas compares social media to cellphone, email services
Justice Clarence Thomas wrote the opinion in Twitter in which he likened social media platforms — the ones at issue here were Twitter, Google and Facebook — to other companies that provide cell phone, email or internet services.
“(W)e generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones — even if the provider’s conference-call or video-call features made the sale easier.”
Thomas disagreed with the family members’ assertion that the recommendation algorithms went beyond passive aid to the terrorists and constituted substantial assistance.
“Viewed properly, defendants’ ‘recommendation’ algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content.”
Not removing terrorist content does not rise to ‘aiding and abetting’ crime
Thomas also disagreed that failing to stop ISIS from using their platforms, even though the companies knew it was a terrorist group and was using them to spread its message and recruit more terrorists, equated to affirmative misconduct.
He said the social media companies did not treat ISIS any differently from any of their other “billion-plus” users: “arm’s length, passive, and largely indifferent.”
He said the family’s claims might have carried more weight if they could have pointed to some independent duty that required social media companies to remove ISIS content or terminate customers’ accounts when they realized customers were using the services for illicit ends.
He also said that under “plaintiffs’ theory, any U.S. national victimized by an ISIS attack could bring the same claim based on the same services allegedly provided to ISIS.”
To show the social media companies “aided and abetted” the terrorist group, the families would have needed to prove more.
“Yet, plaintiffs have failed to allege that defendants intentionally provided any substantial aid to the Reina attack or otherwise consciously participated in the Reina attack — much less that defendants so pervasively and systematically assisted ISIS as to render them liable for every ISIS attack.”
Deborah Fisher is director of the John Seigenthaler Chair of Excellence in First Amendment Studies. This article was published May 23, 2023.