How to Go Beyond Section 230 Without Crashing the Internet

    This is the second blog post in a series about Section 230 of the Communications Decency Act. You can view the full series here.

    The previous post was about what Section 230 of the Communications Decency Act does, and why it does it. One theme is that Section 230 is a very broad and powerful statute. But the law can change, and given that digital platforms have a very different role in society and the economy now than they did in 1996, when the law was passed, maybe it should. This post will list some proposals that I am not necessarily endorsing, but which may be worth considering. But before that, it’s also important to realize that Section 230 has limits even under the law today.

    The limits of 230

    However broad Section 230 may be, it does not shield platforms from liability connected with third-party content in every circumstance, nor does it preempt all state and local regulation for platforms. There are cases where platforms may be held liable in connection with third-party content, but these are best understood as outside the scope of 230, not true “exceptions” to it. But understanding the outer bounds of 230’s scope is important, because it points to areas where amending 230 is not necessary for those who want to hold platforms more legally accountable — and those areas where it might be.

    When the platform actually does help “develop” third-party content

    Fair Housing Council of San Fernando Valley v. Roommates.com is the clearest example of a case where a platform can be held liable because, upon a closer analysis, the platform and not just the user “developed” the content in question. In that case, Roommates.com offered a service for people to advertise housing, but the posting form required that users provide information about gender, family status, and sexual orientation. Housing ads with that kind of information break the law, and in this case, it’s not merely the poster who might violate the law — the service itself does. The court reasoned,

    [T]he part of the profile that is alleged to offend the Fair Housing Act and state housing discrimination laws — the information about sex, family status and sexual orientation — is provided by subscribers in response to Roommate’s questions, which they cannot refuse to answer if they want to use defendant’s services. By requiring subscribers to provide the information as a condition of accessing its service, and by providing a limited set of pre-populated answers, Roommate becomes much more than a passive transmitter of information provided by others; it becomes the developer, at least in part, of that information.

    Thus, Section 230 did not shield Roommates.com from liability. Platforms have broad discretion (as publishers) to edit and moderate content after it has been submitted (whether before or after it is actually publicly posted), but when they are intimately involved in the initial creation of the material, they become liable for it in the same way as for their own blog posts or corporate communications. Recent charges against Facebook, alleging that it engaged in unlawful behavior by furnishing advertisers with tools that allow discriminatory ad targeting, follow the same path.

    Legal obligations that affect third-party content but are still consistent with 230

    Another case that shows the limits of 230 is Barnes v. Yahoo. In that case, Yahoo promised to take down damaging materials from its service — but then didn’t. Clearly, removing posted material is an editorial function — but Section 230 does not shield a platform that enters into a contract to do something and then fails to do it. The court noted that very vague promises, or “a general monitoring policy, or even an attempt to help a particular person,” would not suffice to hold a platform liable — but the same is generally true in contract law. The takeaway is simply that a lawsuit for breach of contract is not precluded by Section 230.

    A more recent case involves a Santa Monica regulation concerning online rental platforms like AirBnB. (This case is currently being appealed, so its holding could be reversed.) Such platforms do not merely allow people to advertise spaces available for rent; they also allow users to complete bookings. To be consistent with Section 230, Santa Monica enacted a law making it unlawful to complete bookings for unlicensed rental units — but not unlawful for those rentals to be listed in the first place. The Ninth Circuit held that this was permissible. AirBnB argued — reasonably, it should be noted — that a law preventing it from completing certain transactions would have the practical effect of causing it to take down listings it could not complete. AirBnB further argued that this meant the law in effect sought to regulate it as a publisher. But the court held that Section 230 does not shield a platform from liability merely because that liability might affect how it moderates content, as long as the law doesn’t regulate that function directly. If AirBnB violated this law, it would be liable as a payment processor and booker, not as a publisher. In fact, the court noted that AirBnB could have chosen to leave unbookable listings up and simply refuse to complete them. While this would be contrary to AirBnB’s specific business model, the Santa Monica regulation would seem to have no effect on an online classified service that allowed users to advertise rentals but did not offer to complete financial transactions.

    Critics of the AirBnB decision argue, for policy reasons, that legal rules that make online platform business models potentially unworkable undermine at least the spirit of 230. But at the same time, Section 230 is not intended to shield online services from all local and state regulation — just those laws that directly address liability as a speaker or publisher. Figuring out exactly where to draw the line between the regulation of a platform’s business activities and its functions as a publisher is tricky and fact-specific. But, assuming the Ninth Circuit decision stands, the regulation of transactions rather than content may be one approach.

    Potential new obligations for platforms that would likely require statutory change

    Because Section 230 is not an unlimited shield against the regulation of online platforms, local, state, and federal policymakers should not simply assume that new obligations require amending 230. At the same time, some policies will require legislatively amending or superseding Section 230. As Harold Feld recently wrote in his book, “The Case for the Digital Platform Act,”

    Congress should decide what content regulation regime we need…[it] would then simply add at the beginning of the statute the following introductory words: “Without regard to Section 230 . . .”

    In other words, I recommend that we stop arguing about Section 230 and figure out what sort of content moderation regime works. Once Congress resolves this issue, Section 230 will no longer pose a significant obstacle. In the meantime, however, Section 230 should remain in place to preserve certainty until a new regime is ready.

    To be sure, Public Knowledge has criticized policies and proposals to change 230 in the past, since some of the proposals for changing intermediary liability standards are simply bad ideas, however well-intentioned. Requiring by law that platforms take down amorphous categories of speech such as “hate speech” is likely unconstitutional, whatever the other merits of such a requirement. Laws that limit safe harbors for some kinds of speech but not others likely are, too. In general, proposals that give platforms broad new responsibilities over speech could backfire, creating harms greater than the ones they were intended to solve, or creating tools (e.g., takedown notices) that, if structured incorrectly, could be weaponized by bad actors.

    One example of an ill-advised change to Section 230 was SESTA/FOSTA. This law was well-intentioned and aimed to stop online sex trafficking. However, it enacted an ambiguous “knowledge” standard that has led some platforms to take down whole categories of otherwise-lawful content and could even have harmed the very constituency it was designed to help. Future policy efforts should avoid this shortcoming by delineating more precisely what a platform’s responsibilities should be, instead of merely referencing common law concepts that have not been developed as applied to online platforms. And policymakers should remember that when it comes to choosing between over-protecting themselves from legal liability, and promoting free speech and marginalized voices, platforms will choose to avoid potential liability every time.

    For these reasons, changes to Section 230, or new laws that go into effect “notwithstanding” Section 230, should be approached cautiously. But it would be reckless, as well as politically untenable, to maintain that Section 230’s broad sweep should remain in effect, unchanged. The problems of online harassment and abuse, misinformation, fraud, hate speech, and even election manipulation probably can be addressed — not necessarily through laws that mandate specific content moderation policies, but through changing platforms’ incentives and addressing aspects of their business models.

    To that end, this post will briefly list a few of the ideas that are in the air. I don’t mean to be vague, but these are preliminary legal concepts that I have discussed with other advocates, lawyers, and so on; they may turn out to be good or bad ideas, but each seems to 1) require new federal legislation to overcome Section 230, and 2) not be facially unconstitutional or likely to lead to overbroad content takedowns. To the extent these concepts reference “greater” liability, this term includes both publisher liability (which, for some purposes, treats the publisher the same as a speaker) and distributor liability (which requires that the distributor have knowledge of the material it’s distributing in order to incur potential liability).

    One idea is greater accountability for some monetized content. Many platforms don’t merely host content; they encourage particular kinds of content by paying posters a revenue share. For example, it has been estimated that one popular YouTuber, Machelle Hobson (referred to in some stories as Machelle Hackney), was paid “between $8,900 and $142,000 a month, and $106,800 to $1.7 million a year” by YouTube for her channel, Fantastic Adventures. But she made the videos by exploiting and abusing her children, and has since been “arrested and charged with child abuse, molestation, child neglect, and unlawful imprisonment of her seven children.”

    Broadly speaking, paying a revenue share to third-party content providers does not affect a platform’s eligibility for Section 230 protection. After all, such a financial arrangement is a typical publisher function, and this sort of financial support is not the same thing as helping “develop” content. That said, the structure of online platforms, and the facts of what content gets recommended and what doesn’t, directly impact what content gets created. Financial incentives amplify this. For major business relationships, the various policy justifications for Section 230 — that it allows internet platforms to host third-party content without incurring potentially enormous liability or having to pre-screen everything before it is posted, that it allows online platforms to act as vehicles for free speech, and so on — seem beside the point.

    A change in liability for paid-for, monetized content would not incentivize YouTube to make it more difficult for users to post their noncommercial videos, or even videos that are sponsored outside the platform’s own system (for example, posting a podcast with its existing host-read ads is not “monetized” content from the platform’s perspective). But if YouTube were liable as a distributor for content posted by business partners it pays hundreds of thousands of dollars to, it might have the incentive to figure out just who it is doing business with, and prevent situations like Fantastic Adventures from happening to begin with. To prevent such a liability system from shutting off platforms as an entry point for small creators, where individualized investigation and vetting is not practical, enhanced responsibilities could kick in only at certain monetary or popularity thresholds (e.g., $3K/month, which would put a YouTuber into the highest tier of earners). The general idea behind a reform such as this would be to recognize both the value of platforms as a way for creators to support themselves, and the fact that platforms themselves do not play a passive role in popularizing and supporting content.
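
    To make the threshold idea concrete, here is a minimal, purely illustrative sketch of how a platform might flag its highest-earning partners for enhanced vetting. It is written in Python; the $3,000-per-month cutoff echoes the example above, and the function and data names are hypothetical assumptions rather than anything drawn from an actual law, proposal, or platform API.

        # Illustrative sketch only: flag monetized partners for enhanced vetting
        # once their monthly payouts cross an assumed threshold.

        MONTHLY_PAYOUT_THRESHOLD_USD = 3_000  # example cutoff discussed above


        def partners_needing_vetting(payouts_by_channel: dict[str, float]) -> list[str]:
            """Return channel IDs whose monthly payout meets or exceeds the threshold."""
            return [
                channel
                for channel, payout in payouts_by_channel.items()
                if payout >= MONTHLY_PAYOUT_THRESHOLD_USD
            ]


        if __name__ == "__main__":
            sample = {"channel_a": 250.0, "channel_b": 8_900.0, "channel_c": 142_000.0}
            print(partners_needing_vetting(sample))  # ['channel_b', 'channel_c']

    The point of a rule structured this way is that heightened responsibilities would attach only to relationships the platform already tracks for payment purposes, leaving ordinary, unpaid uploads untouched.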

    A similar concept would be to impose greater liability on platforms for ads they run, even when those ads are provided by a third party. The online ad marketplace is very complex and confusing, and often websites — due to the technology they employ — have no way to know exactly what ads their users are seeing. Online ads are frequently fraudulent, misleading, or even vectors for malware. A change in liability standards could force the ad tech and online publishing industries to adopt technologies that give them more control and oversight of the ads they run.

    As discussed in the previous post, when platforms recommend or promote content to users, either via human editors or algorithmically, they are protected from liability by Section 230 just as they are for content that was uploaded by a user and never reviewed by a single employee of the platform. Yet when platforms amplify content, instead of merely hosting it, the harm that damaging content can do is increased. Creating a duty of care for such promoted content could give platforms an incentive to examine content more carefully before they promote it, without requiring that they pre-screen content before merely hosting it. This could, in turn, limit the reach of extremist content and misinformation, among other things. While shielding platforms from liability for content developed by third parties has a number of legitimate justifications, the rationale for shielding them from liability when they actively amplify such content seems weaker. A policy change such as this would fit in with the current focus on algorithmic accountability. However, it may be difficult to draw the line between merely hosting content and promoting it, and it would be necessary to consider numerous use cases beforehand.

    A common recommendation with respect to content issues is a notice-and-takedown system of some kind. While the details of the copyright notice-and-takedown system are often criticized by those who think it is too burdensome for rightsholders, platforms, or users, the broad idea still seems quite sound: a platform has a safe harbor for hosting content, but once it is notified that specific content may be unlawful, it can either take down that content to maintain its safe harbor, or choose to keep it up and face potential liability. Potential liability does not mean actual liability, it should be stressed — the content in question may be completely legitimate, countervailing factors may counsel keeping it up, or the takedown notice may simply have been erroneous. Some system along these lines may be considered for tortious material — e.g., material that has been properly adjudicated to be libelous, or material that constitutes a clear invasion of privacy, such as revenge porn. Without going into all the details of where such a system could go awry, it is worth noting that any proposed new notice-and-takedown system should have a way to avoid, or deal with, the various failure modes that have been observed in similar systems already enacted. These include abusive or meritless takedown notices intended to curb speech, the incentive of a platform to take down content without considering the legitimacy of the request, and the “whack-a-mole” problem, where content that has been taken down is immediately re-posted. For example, sending fraudulent or meritless notices should carry severe consequences, and the precise scope of a platform’s duty to limit material being re-posted should be spelled out, in technical detail, in advance. A notice-and-takedown system would be hard to get right, but it remains a viable option for dealing with certain categories of content.
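
    For concreteness, the sketch below models one hypothetical shape such a workflow could take, including a hook for penalizing meritless notices and a simple record of removed material to address re-posting. It is written in Python, and every name and rule in it is an assumption made for illustration; it does not describe the DMCA process or any specific legislative proposal.

        # Illustrative sketch only: a hypothetical notice-and-takedown decision flow.
        from dataclasses import dataclass, field
        from enum import Enum


        class NoticeOutcome(Enum):
            TAKEN_DOWN = "taken down; safe harbor preserved"
            KEPT_UP = "kept up; platform accepts potential liability"
            REJECTED = "notice rejected as meritless; sender may face penalties"


        @dataclass
        class Platform:
            removed_fingerprints: set = field(default_factory=set)

            def handle_notice(self, fingerprint: str, notice_is_plausible: bool,
                              platform_chooses_removal: bool) -> NoticeOutcome:
                """Decide what happens when a takedown notice arrives for hosted content."""
                if not notice_is_plausible:
                    # Abusive or fraudulent notices should carry consequences for the sender.
                    return NoticeOutcome.REJECTED
                if platform_chooses_removal:
                    # Remember what was removed so identical re-uploads can be caught
                    # (the "whack-a-mole" problem noted above).
                    self.removed_fingerprints.add(fingerprint)
                    return NoticeOutcome.TAKEN_DOWN
                # The platform may keep the material up and defend it, at its own risk.
                return NoticeOutcome.KEPT_UP

            def is_known_removed(self, fingerprint: str) -> bool:
                """Check whether newly uploaded material matches something already removed."""
                return fingerprint in self.removed_fingerprints


        if __name__ == "__main__":
            p = Platform()
            print(p.handle_notice("abc123", notice_is_plausible=True,
                                  platform_chooses_removal=True))
            print(p.is_known_removed("abc123"))  # True: an identical re-upload would be flagged

    How notices get verified, and what counts as a plausible claim, are exactly the details that, as noted above, would need to be spelled out in advance.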

    The Roommates.com case and the recent regulatory actions against Facebook have shown that housing discrimination remains a substantial social problem. Other forms of discriminatory advertising, such as for employment, can be equally damaging. For those companies that create tools or platforms specifically tailored to advertisements and postings, Section 230 protection could be conditioned on their exercising due care to prevent their tools from being used for discriminatory purposes. While Section 230 itself already does not foreclose many important forms of civil rights enforcement, it may make sense to clarify an online platform’s responsibilities more specifically. To avoid the ambiguity and potential for over-moderation seen with SESTA/FOSTA, the best approach may be to specifically delineate a platform’s responsibilities, including which platforms are covered. While no one expects platforms to somehow prevent their users from ever acting in discriminatory ways, this proposal could help ensure they take extra care to avoid building tools that facilitate or amplify discrimination.

    As mentioned in the earlier post, claims that an online platform is a defective product or is improperly designed in some way, and that seek to hold it liable on that account, are typically barred by Section 230, because the specific way the platform is said to be “defective” typically relates to its functions as a publisher. (This is not always the case, however.) But there is a difference between a platform that is misused by a malicious actor in one instance, and a platform whose basic design encourages or facilitates abuse. We don’t create broad exemptions to product liability laws just because some products can be misused — instead, the law has developed ways to dispense with frivolous claims other than creating categorical immunities. For example, you can’t sue the manufacturer of the Louisville Slugger baseball bat because its bat was used in an assault, but it would not be immune from suit if it sold a batch of defective bats that splintered during normal use. Similarly, it may make sense to allow defective-product lawsuits to proceed against platforms if they show a pattern of harms and fail to implement industry-standard safety, privacy, and other measures.

    Other design-related responsibilities for platforms could include a greater degree of compliance with accessibility laws, including with respect to third-party content. Incentivizing or requiring that platforms — at least major ones — be designed in a safe manner, and be accessible to persons with disabilities, places no greater burden on them than other kinds of businesses and services already face.

    A different approach for different kinds of platforms

    Finally, any potential changes or exceptions to Section 230 should take into account the different kinds of platforms that exist today — some of which don’t do anything that looks like “publishing” or “distributing” content in the traditional sense at all. For example, certain platforms that provide basic infrastructure, such as internet access providers, shouldn’t be in the business of moderating content at all. This is why Public Knowledge has fought for such a long time for net neutrality. Arguably, other infrastructure-type services, such as caching and DNS providers, shouldn’t be in the business of moderating and reviewing content — or subject to liability for it. While it would be a stretch to say that all of these kinds of services should be common carriers (as ISPs should be), at a minimum, it should be recognized that their role and responsibilities are very different from those of public-facing social media companies and other platforms. It is appropriate to recognize that their responsibilities under a given duty of care may be different, or to apply a different duty of care to them entirely. As an example, while it may be appropriate for a social media platform or a message board to refuse to host certain content or viewpoints, should the power company, the post office, or a broadband ISP also be able to refuse service to someone with repugnant but constitutionally protected views?

    Similarly, some kinds of platforms host content that is not itself objectionable, but that is connected in some way to objectionable content hosted elsewhere. It is best to consider these situations ahead of time, rather than allowing ambiguities in new policies to give rise to unintended consequences. Two examples are app stores, and links from search engines and social media.

    Apple’s app store might host a newspaper app, but Apple has no way to control what the newspaper actually publishes. It might host the app for a social network or user-generated content service available for download, but it has no ability to moderate what gets posted on those services. GitHub might host a software project that is itself lawful, but that can be misused. For that matter, a web browser can be used to access websites of all kinds, including ones that host tortious or unlawful content. In cases like these, holding the software distributor or browser creator liable, instead of the actual platform, site, or service in question, would place the burden in the wrong place.

    Similarly, search engines and social media sites link out to content, but do not actually host it. On the one hand, attaching liability to linking seems to place a burden on the wrong party. The EU’s recent Copyright Directive, for example, has the potential to hamper internet openness without clear benefit. On the other hand, objectionable content is often spread or amplified because it is linked to from major services, and would be ignored otherwise. But while companies like Facebook or Google might have the ability to monitor their services for outgoing links to problematic content, smaller operations like DuckDuckGo, non-commercial Mastodon instances, or even personal websites lack those resources, yet could be subject to the same rules. While it may make sense to subject dominant app stores or other internet services to some heightened standard of care even with respect to content they do not specifically host, their responsibilities should be clear and unambiguous, and likely should not apply to internet services more broadly. If anything, the case of social networks, search engines, app stores, and other digital-native services shows that analogies to the analog world, such as “publication” and “distribution,” quickly break down.

    ***

    In the first post in this series I attempted to describe what Section 230 does, and why it was enacted. In this post, in addition to describing 230’s outer bounds, I’ve attempted to show that while 230 serves the valuable role of ensuring that platforms can moderate content without incurring excessive liability or facing other counterproductive incentives, the policy of broad, nearly inviolable immunity from any liability arising from third-party content is not sacred, and there are various options out there for reasonable reforms. While it is premature to endorse any one of them, reasonable reforms would seek to change the incentives of platforms to host, promote, and popularize problematic content, but would not dictate to platforms which forms of speech are acceptable, and which are not. Any reform has downsides that should be considered, and any increased liability would lead to litigation and periods of uncertainty. However, if changes are enacted carefully, the core speech-enabling functions of Section 230, which enable ordinary users to make their voices heard, can be preserved. 230-related policy changes are hardly the only sorts of legal reforms that may be necessary for platforms — I have argued that users of major platforms deserve due process rights, for instance, and Harold Feld just wrote an entire book about the need for a new Digital Platform Act. In addition to regulation and legal reforms, strong antitrust action may be needed to curb the power of digital monopolies. That said, the law and policy surrounding Section 230, and the question of what platforms’ duties with respect to third-party content should be, is a constant thread connecting many areas of internet policy.