Next week, the Supreme Court is set to hear cases involving the content moderation practices of social media platforms, including Gonzalez v. Google. At the heart of these cases is Section 230 of the Communications Decency Act, which has governed online expression since it was enacted in 1996. In general, it protects online platforms that host third-party content from being held liable for what those third parties publish. Here's the crux:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In other words, individuals should be held responsible for their speech, not the platforms that host it. If you create the tweet, the blog post, or the video, you're responsible for it. Section 230 has been called "the most important law protecting internet speech."

Section 230 passed as a bipartisan bill and has allowed the internet to scale to the level it has. Today, politicians on both sides want to see it repealed, including the current and former president, for different reasons, of course.

The Court will decide whether Section 230 shields companies like Google and Twitter from civil liability for the content that is moderated and curated based on their algorithms.

This is a big deal. Section 230 is the policy that allowed the internet to grow and become a haven for free speech. That's why the Court should uphold it.

If, from the internet's inception, every harmful post that slipped through the curation process could have been the basis for a lawsuit, moderation would have been far too restrictive for the internet to become what it is today. Anything that posed even the slightest threat of a suit would've been stamped out, whether a blog, a Reddit post, a tweet, an Instagram story, or a Yelp review. In fact, it's doubtful those companies would exist at all. The internet as a public square for ideas wouldn't exist. Political discussions, religious debates, and even spirited movie arguments would be thrust out of sight. It's simple: the internet wouldn't be what it is today without Section 230.

Paul Barrett, a senior research scholar at the Center for Business and Human Rights at New York University's Stern School of Business, wrote in The Hill: "If faced with the wave of litigation that Congress sought to prevent, social media companies almost certainly would restrict the third-party expression they allow on their platforms. An enormous volume of user-generated speech would disappear. And large platforms would not be the only ones affected. Litigation threats could shut down crowdsourced sites like Wikipedia, consumer review businesses like Yelp and all manner of websites and blogs that invite user debate. New startups might never get aloft."

At the core of these new cases is the attempt to distinguish content that is merely hosted on a site from content that is recommended. The problem is that this distinction is impossible to draw. "The presentation of third-party material by platforms," writes Barrett, "almost always involves some form of recommendation by an automated system driven by artificial intelligence, which identifies and retrieves content based on users' past online behavior and preferences."

In other words, algorithmic recommendation is impossible to work around. Without algorithms, a social media feed would show only what users post in real time: a scrolling, unfiltered stream of every thought that crosses a mind from Bangladesh to Boise. And that's all you could see. Moreover, if sites could also be held liable for any post perceived as harmful, the feed would be very dull indeed: no dissenting views, no debate, and no controversy.

The original co-authors of Section 230, U.S. Senator Ron Wyden, D-Ore., and former Representative Chris Cox, R-Calif., submitted an amicus brief asking the U.S. Supreme Court to uphold decades of precedent surrounding the law. They had algorithms in mind when they crafted the bill in 1996.

These sites require some form of algorithmic recommendation to function as they do, and, as we know, the algorithms draw on a user's past behavior, likes, and interests. To ask the Court for a narrow carve-out targeting recommendations is therefore to do away with the whole. In the end, the Court should uphold Section 230, just as the lower courts did.