TECH NEWS: COVID-19 Social Media Incident Shows Value of Key Free Speech Protection
As people worldwide go into quarantine and conversations around COVID-19 move increasingly online, thousands of Facebook users received a shock Tuesday night when they discovered the social media outlet had flagged their posts as potential spam or misleading information. This kind of flagging is typically reserved for purposefully misleading websites or those hawking fake products. But this time, it was legitimate outlets like The Atlantic, The New York Post, and USA Today getting the fake news treatment.
In the end, Facebook realized it was a simple technical error and quickly resolved it. However, this incident still serves as a perfect example of how social media could look without a key free speech protection – Section 230 of the Communications Decency Act.
Section 230 is a common-sense law that makes those posting on a platform responsible for the content rather than the platform itself. In other words, Facebook, Twitter, and YouTube aren’t legally responsible for any information or memes you post, even your bad takes on COVID-19.
Section 230 also allows these outlets, along with thousands of other websites, to remove content from their platform without being held liable for content they don’t remove.
The Facebook flagging error likely stemmed from the abundance of caution the company exercised in trying to remove fake COVID-19 content. At a time when the virus is the most talked-about issue online, sorting through content is hardly a simple task, even for one of the most technologically sophisticated companies in the world. Facebook should be commended for trying to prevent purposefully misleading medical content. Even critics of the company have praised its efforts to stop the spread of misinformation.
But if Section 230 changes, something many on both sides of the political aisle have been clamoring for, there will be dozens more examples of the issues Facebook faced this week.
A change to Section 230 could potentially mean that online platforms would be legally responsible for all fake COVID-19 content on their website. In such a scenario, who could blame a website for removing even legitimate posts on the subject when facing potentially significant legal liability?
Another option websites would have, should Section 230 be revised, is to simply not moderate their platform's content at all. This would obviously make it easier for dangerous and misleading posts on how to treat coronavirus to proliferate. Allowing the spread of false or deceptive medical advice would, at best, make the platforms unpleasant to visit, and at worst, be hazardous to people's health.
Online platforms have been responding well to this developing crisis and are providing people with helpful news updates, as well as a meeting ground to stay connected to their friends and family. We don’t need to go around messing with the “26 words that created the internet.” It’s working exactly as intended, even during this extraordinary time.