Due Process for Content Moderation Doesn’t Mean “Only Do Things I Agree With”

    There’s a common theme in many proposals to amend Section 230 of the Communications Decency Act: the idea that companies simply need to follow their terms of service consistently and fairly.

    Of course, I agree. Who doesn’t? As I detailed in a paper in 2018, I believe that dominant platforms should give their users due process rights, including, at times, explanations of content moderation decisions: why particular pieces of content violate a platform’s terms of service, along with responses to objections that the platform is behaving arbitrarily or inconsistently.

    The PACT Act, recently introduced by Senators Brian Schatz and John Thune, tackles this issue in a thoughtful way. Rather than allowing people to appeal the substance of content moderation decisions to the Federal Trade Commission, it requires that platforms give users procedural rights, provide transparency, and explain themselves. It specifically forbids the FTC from examining particular applications of a platform’s acceptable use policy; only procedural violations could be taken up by the FTC.

    The details of what processes platforms should follow are certainly subject to critique. I’m not sure that the idea that platforms should provide a call center, which the bill suggests, makes a lot of sense, nor do I think it is central to the bill. I’d also prefer, at least to start, to apply any new requirements only to the very largest platforms, instead of merely exempting very small ones. That said, this aspect of the PACT Act (as well as the provision that platforms lose Section 230 protection for content that has been adjudicated to be unlawful, not merely asserted to be so, coupled with significant penalties for false representations) should serve as a model for how changes to online liability and content moderation policies can be structured to avoid certain major pitfalls.

    In contrast to the PACT Act, recent proposals by the Department of Justice and Senator Josh Hawley seem intended to limit platforms’ discretion to remove content that they believe violates their policies, by requiring an unrealistic level of specificity in their terms of service and by providing legal remedies for people who simply disagree with particular moderation choices.

    It seems reasonable to many people that platforms should simply spell out what their content policies are, and then be required to follow them. But a few examples show how difficult this is in practice, when it comes to inherently subjective content decisions.

    For example, most platforms have policies against “hate speech” or hateful conduct. Few people would argue that a Neo-Nazi post isn’t hate speech. But there are some people who think that Black Lives Matter content, or posts arguing for police abolition or reparations, are forms of hate speech as well. Some people think that posts supporting Israel are hate speech, and some think that posts opposing Israel are. If a Neo-Nazi has his post taken down, should he be able to sue the platform for being “inconsistent” for not taking down an Israel post? Or for not specifically describing what constitutes Neo-Nazi content in its terms of service? Or for not explaining in advance what particular kinds of content are not hate speech? What if an academic screenshots the post and reposts it with commentary describing why it is hateful? To me, it seems reasonable that a platform might take down the original post but nevertheless allow people to repost it with such commentary. Others might see that as a double standard favoring some kinds of speakers or points of view. Should platforms be required to create a legal document explaining the nuances of when the same piece of content is hateful incitement and when it is a proper subject of critique? These are all extremely important questions that reasonable people can disagree on, but they are not the kind of thing that can be resolved through the legal system. They are value judgments informed by people’s experience, their judgment, and their worldviews.

    Context matters, too. An “altered video” from The Daily Show, which viewers know is satire, is very different from an “altered video” posted by a political candidate without any disclaimer. The same words mean something different depending on who says them, when they say them, and who the audience is. Fundamentally, decisions about content moderation are editorial choices, and there will always be subjectivity, differences of opinion, and borderline cases. The protests across the nation against the murder of George Floyd have highlighted how wide differences of opinion can be: different people simply interpret the same events differently, even when they agree on the same set of underlying facts (which is itself becoming rarer). While it may be reasonable to expect platforms to explain their decisions and be open to counterarguments, it is not reasonable to try to limit the scope of individual editorial decisions with legalese and bureaucratic processes.

    To many platform critics, any content decisions they disagree with are evidence of bias, censorship, or “bad faith” selective enforcement. Senator Hawley’s proposal would define any content moderation choice that does not exactly track detailed written policies as “bad faith” and therefore unprotected by the law, and the DOJ has even proposed eliminating Section 230 protection for platforms that take down content they find “objectionable,” instead having the law spell out a short list of the kinds of content platforms may freely moderate. To be clear, in some instances platforms might actually be biased, and might actually be selectively enforcing their rules, due to resource constraints or even for ideological reasons. Two different platforms might interpret identical acceptable use policies differently and come to contrary decisions about specific pieces of content. There might be good arguments as to why one is wrong and the other is right. But this still doesn’t turn disagreements of this sort into the kind of thing the legal system can resolve. Again, while proposals like the PACT Act entitle people to an explanation from platforms for their decisions, they don’t give people a right to appeal to some higher authority simply because they disagree with the details of a particular decision.

    I believe that many people are extrapolating from the way the FTC can enforce a platform’s terms of service with respect to privacy, and arriving at a policy solution that more likely than not wouldn’t work. For context, while there is no federal privacy law at the moment (something we’re working to correct), when a company makes specific privacy promises in its terms of service and then doesn’t follow them, that can violate the FTC Act as an “unfair or deceptive practice.” FTC enforcement of promises of this kind is a good idea, especially in the absence of many other legal tools to protect consumer privacy. But it’s far from ideal: it likely just encourages companies to adopt vague privacy policies, where it is very hard to determine whether there has been a violation, or to refrain from making specific privacy guarantees at all.

    When it comes to enforcing a platform’s acceptable use policy, there is a similar danger: the “just enforce the terms of service” approach risks companies simply reserving the right to moderate content for any reason, essentially restating the existing law, which says that platforms cannot be sued for removing content that they find “objectionable, whether or not such material is constitutionally protected.” Or, in response to a requirement like the DOJ’s notion that platforms should only apply “plain and particular” terms, platforms could adopt a “hands off” approach and moderate very little content, allowing harmful and obscene material to be distributed freely, for fear of incurring liability for failing the impossible task of defining in clear terms, ahead of time, exactly what a harmful post looks like.

    Online speech platforms like social networks and user-generated content sites should not operate like common carriers, with an obligation to act as passive conduits for user content, or anything like it. Common carriage is a great, useful idea for basic infrastructure providers, but unmoderated or lightly moderated social media sites are not good for speech. Allowing hateful and harassing content, misinformation, scams, conspiracy theories, and any number of other abuses to be posted and hosted freely is not good for free expression. It drives people off the platform, drowns out worthwhile speech, and silences marginalized voices. For some platforms, unwanted violent or obscene content can drive their intended audience away. Yet proposals to use terms of service to tie platforms’ hands would reduce their ability to create useful venues for free expression for particular audiences, and I believe would not have the result their supporters envision.

    Does this mean that we should simply accept arbitrary and inconsistent behavior from major platforms? Of course not. As I’ve said, platforms should be required to explain their reasoning, and procedural requirements of the kind found in the PACT Act would likely bring more consistency and predictability to content moderation decisions, without putting the government in the position of choosing between “good” and “bad” speech. I’d like to see platforms evolve their moderation practices in a way reminiscent of the common law, where individual cases serve as precedent for later, similar cases. That approach provides some consistency over time while preserving the flexibility to adapt to unforeseen circumstances.

    People who disagree with the editorial and content choices of dominant platforms should support competition and interoperability rules that empower users, give them more choices, and allow different platforms to differentiate themselves based on the quality and kind of content moderation they perform. We will continue to call attention to instances when, in our opinion, platforms are doing well and when they are doing poorly; we recently submitted a letter to the House Energy & Commerce Committee highlighting our research on what an effective content moderation regime for disinformation might look like. Critiques of this kind, however, do not mean that we should seek to prevent platforms from applying their judgment in particular cases by requiring perfect compliance with impossibly detailed terms of service.

    Image credit: mikemacmarketing/vpnsrus.com