The FCC and Section 230

One of the oddest areas of law under the jurisdiction of the FCC is 47 U.S.C. § 230, the law that shields ISPs and social media platforms from liability for content created by users. Because Section 230 falls under the FCC’s jurisdiction, the agency has a role in regulating the Internet and the large companies that operate there, a role far outside the scope of everything else the FCC does.

Congress clearly intended to give the FCC authority over Section 230. The Telecommunications Act of 1996 included Title V, the Communications Decency Act of 1996, which contained the new Section 230 language. By placing Section 230 into the Communications Act, Congress made clear that it wanted the FCC to oversee the nascent Internet, which was just starting to reach public awareness. The FCC’s jurisdiction over Section 230 was bolstered by two Supreme Court decisions authored by Justice Antonin Scalia: AT&T Corp. v. Iowa Utilities Bd. in 1999 and City of Arlington v. FCC in 2013.

The most important provision of Section 230 says, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It further says that no “provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The term “interactive computer service” is commonly understood to include websites that host or moderate content generated by others, such as social media platforms. It also covers comments made by readers in online newspapers, customer reviews of products on Amazon, and anywhere else the public can express an opinion on the web. Section 230 has been used to shield online platforms against most civil suits based on what users say. Without Section 230, social media sites would face lawsuits from anyone who finds user content unacceptable. It’s not a stretch to say that the Internet, as we know it, could not function without the protections provided by Section 230.

Section 230 is very much in the political news these days. Twenty-five bills were introduced in Congress over the last two sessions to modify Section 230’s language, addressing a range of topics. One set of proposals would provide more protection for children online. The Senate recently passed the Kids Online Safety Act, which would create a duty of care requiring online platforms to take steps to protect young users. If passed by the House, it would be the most significant online safety statute since the Children’s Online Privacy Protection Act of 1998.

Other proposed laws deal with content moderation by online platforms. Republican lawmakers want less content moderation to counter what they see as bias against and censorship of conservative views, while bills from Democratic lawmakers would push platforms to take a more active role in removing disinformation.

Congress has also been discussing how to deal with content generated by AI. When Section 230 was written in 1996, there was no concept that computers could directly generate content. It’s not clear that AI-generated content has, or should have, the same protections afforded to content from human users. There have already been numerous examples of AI generating controversial content.

There is even a movement in Congress to kill Section 230, with the thought that a repeal would force lawmakers to get serious about creating a modern replacement law. In the current environment, lawmakers are not only squabbling with each other about these issues but also contending with intense lobbying by big tech companies that resist any changes to Section 230.

To anybody who follows the FCC, the idea of the agency getting involved in these kinds of messy controversies seems alien to the rest of its mission. But issues related to content moderation are heating up, and it’s likely the FCC will get pulled into the fray.