Negligent AI speech: Who is responsible?

By Vaughan James, University of Florida, published on November 30, 2023



Ready or not, it appears that the age of AI is upon us.


Practically overnight, AI technologies like ChatGPT have slipped into the fabric of daily life, poised to revolutionize the way that we work, the way that we communicate, and even the way that we seek information.


There are downsides to the swift adoption of AI, however, especially when our use outstrips our ability to control it. After all, to paraphrase Arthur C. Clarke’s famous Third Law: any sufficiently advanced technology is indistinguishable from a regulatory nightmare.


What happens when an AI platform gives out incorrect information that then causes harm to those who use it? If ChatGPT were to tell you that a particular type of mushroom was safe to eat when it is in fact toxic, and you were to eat it and fall ill, who would be at fault? Is it you, for believing something you read on the internet? Is it ChatGPT, for producing a string of words that was plausible but factually incorrect? Or is it OpenAI, the AI research and deployment company that developed the ChatGPT platform, for not making sure that its responses were accurate?


Jane Bambauer, University of Florida College of Journalism and Communications Brechner Eminent Scholar and director of the Marion B. Brechner First Amendment Project, provides insight into these complex questions in her legal essay, “Negligent AI Speech: Some Thoughts About Duty.”


Bambauer examines the potential for AI liability through the lens of “duty” – whether an AI platform has a legal obligation to exercise reasonable care toward its users. Without a legal duty, AI platforms could not be held to be negligent; that is, even if they caused harm, they could not be held accountable, because they had no obligation to avoid that harm. This is of pressing concern since AI platforms are still so new. Regulations concerning their use are still very much in their infancy, meaning that civil courts stand to have a great deal of influence over how the platforms are regarded moving forward.


Bambauer suggests that a determination of duty will come primarily from the type of relationship that courts believe AI platforms and users share. Liability is likely to apply entirely differently if AI platforms and users are strangers to one another than if they are determined to have a special relationship.


If courts view the relationship as one between two strangers, who have little reason to trust one another on topics such as personal health, then AI platforms will likely be treated as a form of pure speech or mass media. With pure speech, e.g., books, conversations, posts on social media, it is generally difficult to claim negligence even when the information is incorrect. The ideas spread through speech might exist in physical form, such as a book, but they are not treated as “products” under product liability rules when the harm comes from the meaning of the words.


The same logic has applied to new digital media. The Supreme Court recently found that Twitter and Google had no duty when their platforms were simply being used as intended, even when that use was directly connected to harm or criminal activity. As long as no special provision or treatment has been given to users who intend to cause harm, the platforms themselves aren’t held responsible.


However, with advances in technology, it isn’t difficult to imagine the development of highly specialized professional AI platforms for fields like medicine and law. With such advancement, AI could come to be viewed by the courts as a virtual physician or a virtual financial advisor, creating the duties of care that professional fiduciaries owe to their clients.


It is clear that unless legislatures pass laws regulating AI applications, courts will bear a huge responsibility in establishing precedents around duties of care for AI platforms. It is also apparent that the path to proper regulation will be a winding, difficult one. Ultimately, Bambauer argues that “Courts are probably best served by starting with the presumption that AI output should be treated the same as Google search results or mass media products, as far as tort duties are concerned.”


Whether they do (and whether they consult ChatGPT first) is really anyone’s guess.


The original article, “Negligent AI Speech: Some Thoughts About Duty,” appeared in the Journal of Free Speech Law on May 13, 2023.

This summary was written by Vaughan James, UFCJC Ph.D. 2022.


Republished courtesy of the University of Florida under a Creative Commons license.


