
The proposed Kids Online Safety Act would prohibit children under 13 from creating social media accounts and would permit enforcement actions against social media companies that allow them to do so. In this photo, family members at a Capitol Hill rally on Jan. 31, 2024, hold photographs of their loved ones who were harmed on social media. (AP Photo/Jose Luis Magana)

Recent years have witnessed increased use of social media not only by adults but also by juveniles. Studies have suggested that excessive exposure to social media has led to increasing isolation, particularly among juveniles.

In a number of high-profile cases, juveniles have been catfished (deceived by individuals misrepresenting their identity), doxed (had personal information released without their consent), subjected to sextortion (threatened or blackmailed over the exposure of shared sexual images), or subjected to online bullying. Some have experienced heightened anxiety, and some have even committed suicide.

The last congressional legislation designed to protect children on the internet was the Child Online Protection Act of 1998. As Windsor Johnston (2024) has noted, this was “before Facebook, Instagram, Snapchat, and smartphones.”

Provisions of the proposed Kids Online Safety Act

On July 30, 2024, the U.S. Senate overwhelmingly adopted a bill popularly known as the Kids Online Safety Act (KOSA). It was originally introduced and cosponsored by Democratic Senator Richard Blumenthal of Connecticut and Republican Senator Marsha Blackburn of Tennessee. It is now awaiting action in the U.S. House of Representatives, but President Joe Biden has indicated that he will sign it if it is presented to him.

The act seeks to prohibit individuals under the age of 13 from creating their own social media accounts. Further, drawing from tort law, the bill imposes a “duty of care” on social media providers that serve youths ages 13 to 17, requiring them to prevent the use of algorithms that direct such users to content that would expose them to bullying, eating disorders, violence, substance abuse, sexual exploitation, or advertisements for illegal products such as dangerous drugs, or that might lead to suicide or death (Ortutay 2024).

Seeking to punish only platforms that knowingly target children, the law allows the Federal Trade Commission to take action only in cases supported by “competent and reliable evidence, taking into account the totality of circumstances, including whether a reasonable and prudent person under the circumstances would have known that the user is a child or teen.”

The law, as presently worded, would not apply to platforms whose primary function is selling commercial goods, teleconferencing or videoconferencing, providing information via encyclopedias or dictionaries, cloud storage, video games, or email services.

As revised, the act would also not shield juveniles from LGBTQ-related or reproductive information, and it would not give state attorneys general the right to selectively seek out information on the basis of ideology.

Supporters say protecting children online is overdue

Supporters of the law argue that protections for juveniles are long overdue. Critics, however, fear that the act might chill speech not directed at juveniles and might give undue authority to the Federal Trade Commission (Cox 2024). Pointing to Instagram’s decision to introduce new mandatory teen accounts, some also argue that market responses to consumer complaints are preferable to those imposed by governments. Others believe that parents should be the primary gatekeepers.

A number of states have already adopted similar laws, some of which have been challenged in court (Marwick et al. 2024; Sherman 2024).

John R. Vile is a political science professor and dean of the Honors College at Middle Tennessee State University.
