Section 230 FAQs

Section 230 of the Communications Act protects social media companies from liability relating to hosting, editing, or taking down user-generated content. It is one of the most wide-reaching laws affecting the internet. It’s also one of the most misunderstood. Our experts weighed in below on some of the most commonly asked questions we’ve heard about Section 230.

Contents:
Does Section 230 mean platforms can delete opinions they don’t agree with?
Why should I care about platform liability?
How does Section 230 protect free expression?
Why should I trust Big Tech to make decisions about content?
How can a platform decide what’s true and what isn’t when it comes to misinformation?
Doesn’t Section 230 just give more power to platforms that already have too much power?
Are platforms biased against conservative voices?
Under Section 230, are platforms allowed to be editors instead of just publishers?
What’s the benefit of banning hate speech?
What next steps can we expect to see on Section 230?

Does Section 230 mean platforms can delete opinions they don’t agree with?

John Bergmayer:

Private companies have a right under the First Amendment to delete any content they disagree with, basically for any reason. Section 230 bolsters and strengthens that right. The further questions are whether that’s a good idea, and whether we want to change the law so they can’t.

The structure of the marketplace is not ideal right now because we have this handful of dominant platforms. So, if you’re someone who has an opinion that gets deleted from the major platforms, it might seem to you like censorship and that you have no way of getting your idea out there. 

The solution to that is not to tell platforms that they have to operate as though they’re the government subject to the First Amendment. A better approach is to just have lots of different platforms with lots of different points of view, and if you find one platform more hospitable to your way of thinking, then that’s the platform you go to. Other people go to a different platform, and so forth. 

Public Knowledge generally supports proposals around interoperability, whereby different platforms can all communicate with each other, so that even if you’re on one platform and your friends are on another, you can still talk to each other. It should work just like email, where you don’t all need accounts with the same email service, or like the phone network, where you don’t all need service from the same phone company. This way, there’s no single big, scary company that can prevent you from speaking your mind, because you have lots of different companies to choose from.

Why should I care about platform liability?

Greg Guice: 

Section 230 at heart just means that platforms can offer users a place to go and express their views on a range of topics, from restaurant reviews to political news – heck, even commentary on a field of emus – without incurring liability for what those users say. This is what user-generated content is. 

Without this law, you wouldn’t be able to tweet, review restaurants on Yelp, complain in the comments section of your favorite digital news site, or engage your favorite Reddit community, because these platforms would be terrified of the liability that would arise from your post. This is the law that keeps the internet humming along, enabling people from all over the world to talk to each other freely. 

This law also allows platforms to develop an array of communities by moderating out content that is contrary to the community standards they developed and you agreed to when you signed up to join the community. So, for example, this prevents people on their favorite cooking app, or those discussing baseball on the MLB app, from being inundated with rants from people pushing white supremacist talking points or sharing inappropriate images – and I’m not talking about a fallen soufflé.

Section 230 provides platforms this flexibility so they can actually promote more speech consistent with those users’ expectations. An unmoderated network, particularly one with public discussions, would be overwhelmed with low-quality content, abusive users, and spam.

How does Section 230 protect free expression? 

Lisa Macpherson: 

One of the reasons that the advent of social media has been so exciting is that it’s interactive and it offers limitless channels. In theory, that offers greater opportunities for all voices, including those who haven’t had access to traditional media, to be heard. But as a few platforms have become very dominant, the price of being excluded from those platforms and from that dialogue has become really high. So enabling free speech, at times, does require fostering an environment where voices aren’t suppressed by hate speech or by abusive or misleading information from other users. Section 230 encourages platforms to establish rules of the road and then to enforce those rules in their content moderation efforts, and by doing so, to make way for a wider variety of voices. Of course, platforms should be transparent and have an appeals process, but we believe this content moderation by the platforms actually enables freedom of expression for a wide variety of voices.

Why should I trust Big Tech to make decisions about content? 

Charlotte Slaiman:

We don’t want a situation where you have to trust Big Tech! Consumers should have choices when it comes to social media, but because of weak competition policy, they don’t. Public Knowledge has been working hard on a legislative and regulatory agenda to promote competition against the largest tech platforms. These competition proposals could also help put users in the driver’s seat when it comes to questions about what content we see. Learn more and contact your members of Congress at publicknowledge.org/DigitalRegulator.

How can a platform decide what’s true and what isn’t when it comes to misinformation?

Alex Petros:

The great thing is that platforms don’t have to act alone. They can partner with fact checkers and authoritative sources – like the Associated Press or the CDC – to get verified, accurate information out there. Done correctly, this would allow users to educate themselves and limit the spread of false, disproven narratives. Another great thing about fact-checking is that it’s a task journalists naturally do really well. As funding for journalism – particularly the all-important small, local, and diverse variety – has dried up, this could represent a really important new revenue stream for them. For more details, check out our Superfund for the Internet proposal.

Doesn’t Section 230 just give more power to platforms that already have too much power? 

Greg Guice:

While there’s a lot of focus on Big Tech platforms, there are thousands of platforms, with new ones emerging all the time, that benefit from the protections afforded by Section 230. That said, it is true that large platforms have too much power – but not because of Section 230. It’s because of other issues in the market, like network effects, market consolidation, and other dynamics that confer market power and gatekeeper status on the largest platforms.

So when a user’s content is taken down by one of these platforms, it feels like they’ve been silenced, because it doesn’t feel like they can go elsewhere and actually be heard. That feeling points to the power of these platforms.

To address market power like this, we need solutions that ensure consumers have more choices and can easily switch between those choices. That points to a need to foster a more competitive marketplace, which is what Public Knowledge is working to bring about.

We ultimately need stronger competition policy to diminish the power of these companies. With more competition, platforms will feel pressure to do more and better moderating of misinformation and harassing content, since people will be more willing to leave if they are unsatisfied. 

Are platforms biased against conservative voices?

Lisa Macpherson:

Some Republican and conservative political and media influencers base their proposals for Section 230 reform on the idea that conservative content is censored by social platforms. Researchers have taken this accusation seriously, but the research consistently shows that conservative content, far from being suppressed, earns more interaction than left-leaning or even politically neutral pages and accounts. In fact, a small number of conservative users routinely get more conversation and engagement than their liberal counterparts or even traditional news outlets. Facebook’s own data shows that the top-performing link posts, week after week, generally come from conservative voices. And some tweaks Facebook made to its news feed algorithm in 2018 did more to cut exposure for progressive news outlets than for conservative ones. So there may be good reasons to reform or update Section 230, but conservative bias isn’t one of them.

Under Section 230, are platforms allowed to be editors instead of just publishers? 

John Bergmayer:

The answer to that question is yes, platforms are allowed to be editors. Framing it as editors versus publishers doesn’t really make sense, because all publishers are editors. The act of publishing is choosing what you want to publish; it involves curation and editorial choices, so the two are basically the same. Section 230 does not say that platforms have to pick a particular method of operation. It does not say that platforms have to leave everything up, and it does not say that platforms have to heavily edit what users are allowed to say on their sites. Section 230 says instead: it’s up to you, platform, how you want to operate your own service.

What’s the benefit of banning hate speech? 

John Bergmayer:

We want to make sure that the government follows the First Amendment, which means that except for some very narrow categories of speech, essentially you can never be punished for what you say. You can’t be put in jail, the government can’t fine you, and you can’t be censored by the government. That’s appropriate. The government has police, it has the military, it taxes people, and it has a lot of power over people’s lives. It’s also true that private companies have a lot of power over people’s lives, but there is a solution to that problem, which is that we want there to be more robust competition between different platforms that might tackle these issues differently. 

In terms of promoting free speech in a more general sense, apart from the First Amendment, I think that removing hate speech from platforms intended for mainstream users is a way of promoting free expression. It’s very hard to get your ideas out there and to communicate with like-minded people if you’re in an environment where you’re being attacked and harassed, where every time you say something a thousand people say nasty things to you, and where just using the site is unpleasant and stressful because you’re constantly seeing hateful messages that are very hard to filter out. Banning hateful speech, or violent speech, or just certain categories of speech, seems to some people to be against free expression. But we’re not saying that those categories of speech necessarily should be illegal, or that there shouldn’t be a place where people can communicate with others who want to hear them. It’s actually very helpful to have a more curated and edited environment in which to make yourself heard.

What next steps can we expect to see on Section 230?

Lisa Macpherson:

Legislators have already introduced ten proposals for Section 230 reform in the 117th Congress, including one for outright repeal. But it may remain just as difficult to get alignment on a specific proposal since, in general, the parties want different outcomes: Republicans would like to see less content moderation, and Democrats generally want more, particularly of harmful disinformation and hate speech. That’s why Public Knowledge has proposed a set of principles that we hope legislators from both parties will use to guide their discussions of Section 230 reform. They encourage things like ensuring that platforms have clear due process and transparency in their content moderation decisions, that they protect the voices of traditionally marginalized communities, and that they focus any liability for platforms on their own business activities and conduct, rather than on any specific type of content posted by users. We’d also like to limit reforms to the dominant platforms, or make accommodations for small platforms, so that we can encourage competition and choice for users. You can learn more about our principles for Section 230 reform at Section 230 Principles to Protect Free Expression Online.