America is the undisputed leader in the online space, home to some of the most successful and innovative tech companies in the world. One key reason for this leadership is Section 230, the section of the federal Communications Act of 1934, enacted as part of the Communications Decency Act of 1996, that provides limited federal immunity to providers and users of interactive computer services. According to the Congressional Research Service, the statute “generally precludes providers and users from being held liable—that is, legally responsible—for information provided by another person, but does not prevent them from being held legally responsible for information that they have developed or for activities unrelated to third-party content.” Over the decades, this equilibrium has allowed the internet to grow and flourish. Recently, however, the law has come under fire from critics who misunderstand its importance.

What does Section 230 actually say? “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In plain English, when a website hosts speech or other content created by someone else, such as a video or a comment thread, the website isn’t considered the author of that content.

In fewer than 30 words, this provision has two huge effects. It protects companies from lawsuits over speech they didn’t create, and it allows them to remove some of the most objectionable online content without fearing a lawsuit. For example, a website cannot be sued for declining to delete inflammatory comments left on one of its posts. Section 230 lays the foundation for both digital freedom and the moderation of inappropriate content. It’s crucial to the internet as we know it.

Critics often mischaracterize Section 230 as total immunity and a license for websites to act recklessly, but this is a fundamental misunderstanding of its goals and consequences. Section 230 is not a free pass; it does not exempt platforms from liability for content that violates federal law or their other legal obligations. Nor does the statute prevent providers from moderating content they deem inappropriate or lewd. Under Section 230, private companies may establish their own community rules and enforce them as they see fit in order to deliver the best experience to their users. The result is a balance in which websites are incentivized to block illegal content while permitting free speech.

Another common misrepresentation of Section 230 is that it favors and empowers “Big Tech” companies. The opposite is true. Section 230 is the reassurance that emerging technologies and smaller companies need to keep innovating. Large corporations are well equipped to handle the barrage of lawsuits that would occur in the absence of Section 230, but startups are not. Eliminating or narrowing Section 230 would stop competition in its tracks with the threat of total legal responsibility for outside content.

Over the years, several attempts have been made to weaken Section 230. Such proposals include stripping its protections from internet platforms like social media or generative artificial intelligence. One new proposal goes so far as to repeal Section 230 entirely. Section 230 makes online free speech and innovation possible. It is the protection that has fueled digital progress in America. Without it, free speech online would suffer and harmful online content would increase. Understanding what the law actually does is the first step in combating efforts to send the internet back decades.