The Pandemic Proves We Need A “Superfund” to Clean Up Misinformation on the Internet

    This blog post is part of a series on communications policies Public Knowledge recommends in response to the pandemic. You can read more of our proposals here and view the full series here.


    “When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.”

    This prediction appeared in a New York Times opinion piece in — wait for it — June of 2019. It was obviously prescient: When the novel coronavirus — now generally known by the name of the illness it causes, COVID-19 — did strike, the World Health Organization warned, “We’re not just fighting an epidemic; we’re fighting an infodemic. Fake news spreads faster and more easily than the virus, and it is just as dangerous.” Public Knowledge has been tracking the efforts of digital platforms to counter the “rumors, misinformation and flat-out lies” about COVID-19 that now appear on the internet at alarming speed and volume. Our goal is to inform policy perspectives about the digital platforms and their approaches to countering misinformation both during and after the crisis. This post is an effort to sift through what we’ve learned and to put forward a specific policy solution — a “superfund for the internet” — for countering misinformation flowing over digital platforms after the pandemic.

    The Politicization of Coronavirus

    The 2019 New York Times opinion piece made another prediction that was slightly less prescient: that a pandemic would provide “an apolitical way to collectively approach the general problem of misinformation and fake news.” After all, the piece pointed out, “[P]andemics are different; there’s no political constituency in favor of people dying because of misinformation.”

    When the pandemic did hit, Facebook CEO Mark Zuckerberg, who has been notoriously resistant to moderation of political content on the platform, also noted, “The difference between good and bad information is clearer in a medical crisis than in the world of, say, politics. When you’re dealing with a pandemic, a lot of the stuff we’re seeing just crossed the threshold. So it’s easier to set policies that are a little more black and white and take a much harder line.”

    That’s not exactly how it’s played out. Unfortunately, information about the novel coronavirus pandemic has become every bit as politicized as what we normally consider highly partisan topics. This information is subject to the same patterns of creation and distribution as content designed to sow division and undermine democratic institutions: content created and shared by trolls and bots, content pushed by foreign interference campaigns, and even content amplified from the bully pulpit, which has been used to convey conspiracy theories, spread medical misinformation, and apply political pressure to federal science agencies.

    The good news about this nightmare scenario is that it makes the pandemic a very appropriate model for how platforms should manage other types of misinformation, including overtly political misinformation. In fact, the very same strategies may be required for both pandemic and political information, since there is now strong evidence of the danger to democracy posed by pandemic-related disinformation from foreign parties, which is being used to weaken democratic checks on power or interfere with elections.

    Harms of Misinformation About Coronavirus

    The politicization of the virus means misinformation about it can cause real and significant harms beyond the individual life-or-death consequences of misunderstanding the epidemiology of the disease. In fact, it can create many of the same harms as political misinformation, including:

    • Fearmongering and increasing panic and angst
    • Threatening the physical safety of individuals
    • Limiting the effectiveness of official and institutional efforts
    • Sowing mistrust, division, and polarization
    • Fostering racism and discrimination

    The last of these has taken on particular, painful dimensions for COVID-19; communities of color have been at greater risk of violence and death. To be sure, the pandemic has “shined a bright light” on systemic racism and structural inequities that have existed for generations. Data from state after state shows black Americans are experiencing higher infection and mortality rates from COVID-19 than other communities, as well as greater economic impacts. This is due to a complicated mix of health disparities (driven by underlying health conditions, lack of insurance and access to health care, and substandard housing, among other factors) and a distrust of the medical establishment in black communities rooted in a history of medical experimentation. But it’s also due to early and ongoing misinformation about whether black individuals were immune to the disease, and about the government’s response. There has also been a rise in anti-Asian violence since the beginning of the pandemic. Misinformation about the role of Chinese citizens in spreading the virus to and throughout the U.S. has led to increases in bias, discrimination, hate speech, hate crimes, and violence.

    Separating Fact From Fiction About Separating Fact and Fiction

    It’s important to root any strategies to counter misinformation in data about their effectiveness. Academic and social science research provides some strategies for countering misinformation, regardless of content or context. For example:

    • Counter with accurate information: Research consistently shows that the most effective strategies are as much about amplifying accurate information as they are about managing misinformation.
    • Evaluate the source: According to NewsGuard, over 80% of the 197 sites publishing material misinformation about the coronavirus were already notorious for publishing false health content, including political sites whose embrace of conspiracy theories extends well beyond politics.
    • Avoid binary solutions: In both academic research settings and platform experiments, trying to label or block information based on being “true” or “false” has had unintended consequences. These include the backfire effect (people click on false content out of curiosity); false positives (people assume anything not labeled as false is true); and defiance (people share false information that supports their beliefs out of tribalism).
    • Prioritize misinformation: Even if just for resource management, misinformation should be prioritized for remedial action, such as labeling, downranking, or removal by platforms, based on its potential for harm and its degree of visibility or engagement (a rough scoring sketch follows this list).
    • Increase the salience of accuracy: When users are encouraged to think about accuracy before encountering misinformation, they are less likely to engage with or share it.
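
    To make the prioritization strategy concrete, here is a minimal sketch, in Python, of how a platform might triage flagged items by combining estimated harm with reach. This is purely illustrative: the field names, weights, and scores below are invented, and no platform has published its actual prioritization model.

```python
from dataclasses import dataclass

# Hypothetical triage model: rank flagged posts for remedial action
# (labeling, downranking, removal) by combining estimated harm potential
# with visibility and engagement. All names, weights, and scores are
# invented for illustration.

@dataclass
class FlaggedItem:
    item_id: str
    harm_score: float  # 0.0-1.0: dangerous medical claim vs. mild inaccuracy
    views: int         # visibility
    shares: int        # engagement; a proxy for how fast the item is spreading

def triage_priority(item: FlaggedItem) -> float:
    """Higher score means review sooner: estimated harm scaled by reach."""
    reach = item.views + 10 * item.shares  # assume a share signals spread more than a view
    return item.harm_score * reach

flagged = [
    FlaggedItem("post-1", harm_score=0.9, views=1_000, shares=50),     # fake cure, small audience
    FlaggedItem("post-2", harm_score=0.2, views=500_000, shares=900),  # mild inaccuracy, viral
]

for item in sorted(flagged, key=triage_priority, reverse=True):
    print(item.item_id, round(triage_priority(item)))
```

    A real system would weigh far more signals (classifier confidence, spread velocity, audience vulnerability), but the point of the strategy survives the simplification: act first where harm multiplied by reach is greatest.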

    What Digital Platforms Are Doing to Counter Pandemic Misinformation

    Before we talk about the efforts of digital platforms in deploying those and other strategies, let’s define our scope. Different companies may be characterized as platforms in different contexts. However, concerns about information quality and the media apply most directly to companies engaged largely in open (non-encrypted) information distribution, such as Google, YouTube, Facebook, and Twitter. For this reason, this perspective focuses primarily on those platforms (although Public Knowledge has been tracking many others during the pandemic).

    Simply put, based on our tracking, the efforts of these platforms to counter misinformation about the pandemic far exceed anything they’ve done — or said they could do — in the past. In the process, they have undermined their own past arguments for an arguably lax approach, ranging from “It’s too hard!” to “Free Speech!” to “It’s not our job.” Their approaches have encompassed:

    • Removing or downranking misinformation that doesn’t pass fact-checking
    • Upranking and featuring authoritative content from recognized health authorities
    • Creating and showing their own content panels using data from global and local health organizations
    • Pausing or deleting accounts that repeatedly defy their standards (even when they belong to prominent people)
    • Banning exploitative ads to prevent price gouging and sales of fake or counterfeit protective supplies
    • A full range of changes to user experience design, including nudging strategies and adding friction to sharing of content

    The pandemic also created a forced experiment in using machine learning to moderate speech online, as several platforms sent their human moderators home to protect their health and comply with shelter-in-place orders — and they were unusually transparent in their warnings about inevitable mistakes. Lastly, several platforms have extended their existing philanthropic and partnership programs in funding quality local journalism.

    Besides their willingness to change their posture and user experience design in favor of content moderation, one thing above all has enabled these strategies: more extensive partnering with other organizations, whose authoritative content and information analysis help the platforms check sources, rank content up or down, direct people who have encountered misinformation to debunking sites, and understand which kinds of misinformation may create the greatest harm.

    For example, under a prominent heading, “Ensuring everyone has access to accurate information and removing harmful content,” Facebook discusses its work with over 60 fact-checking organizations; its grant to the International Fact-Checking Network, among others; and how it uses similarity detection methods to identify duplicates of debunked stories. Facebook also added the Coronavirus (COVID-19) Information Center to its News Feed, which offers the “latest news and information as well as resources and tips” to stay healthy.
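
    Facebook hasn’t published the details of those similarity detection methods, but the underlying task is standard near-duplicate detection. Below is a minimal sketch assuming an off-the-shelf approach: TF-IDF over character n-grams with cosine similarity, via scikit-learn. The sample claims and the 0.6 threshold are invented for illustration.

```python
# Toy near-duplicate detector for catching reworded copies of debunked claims.
# The representation (character n-gram TF-IDF + cosine similarity) is a common
# generic technique, not Facebook's published method; the threshold is arbitrary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DEBUNKED = [
    "Drinking bleach cures the coronavirus.",  # hypothetical examples
    "5G towers spread COVID-19.",
]

def matches_debunked(post: str, threshold: float = 0.6) -> bool:
    """Return True if the post is textually close to any known-debunked claim."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    matrix = vectorizer.fit_transform(DEBUNKED + [post])
    scores = cosine_similarity(matrix[-1], matrix[:-1])  # post vs. each debunked claim
    return bool(scores.max() >= threshold)

print(matches_debunked("drinking BLEACH will cure the corona virus!"))  # likely True
print(matches_debunked("Wash your hands for twenty seconds."))          # False
```

    Production systems operate at vastly greater scale and also match images and video, but the principle is the same: once fact-checkers debunk a claim, lightly reworded copies can be flagged without re-reviewing each one.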

    At Google, under the headings “Protecting people from misinformation” and “Helping people find useful information,” the company asserts that “promoting helpful information is only one part of our responsibility.” It describes the company’s efforts to remove dangerous or misleading COVID-19 misinformation — that is, information that contradicts what reputable sources have published or reviewed — from its apps. An SOS Alert in Search connects people with the “latest news plus safety tips and links to more authoritative information,” including from the World Health Organization. The company is also leveraging its relationship with the International Fact-Checking Network, among others.

    YouTube, meanwhile, is using partnerships to “improve the news experience,” including by “raising up authoritative sources of information” and expanding the use of fact check information panels providing additional context from high-quality sources.

    And Twitter is accelerating its opaque process for verification (the blue checkmark) for accounts providing credible updates around COVID-19, labeling “manipulated media,” and removing content that could place people at a higher risk of transmitting COVID-19, including “Denial of expert guidance,” “Encouragement to use fake or ineffective treatments, preventions, and diagnostic techniques,” and “Misleading content purporting to be from experts or authorities.”

    A Policy Solution for Misinformation: “A Superfund for the Internet”

    In general, we give these platforms credit for their efforts during the coronavirus crisis thus far. We should all recognize that the platforms are having to adapt to enormous amounts of additional activity and strange new use cases. Moderation decisions that were difficult under the best of circumstances, with people responsible for them, are now being made largely by machines. Platforms that had big user bases now have huge user bases, making the exploitation of their speed and scale for spreading false information far more worthwhile. Plus, of course, our scientific understanding of the virus and how to counter it has changed substantially over time.

    But we got a preview of what may happen to all that good work after the crisis when Mark Zuckerberg said, in that same interview, that it was “hard to predict” how things would play out after the pandemic, and reiterated that the kind of threats posed by misinformation about the virus were “in a different class” (though six weeks later, he may not still feel that way).

    Given how much of the misinformation problem is generated through the pervasive reach, speed, and power of digital platforms, we believe it is critical that the effective strategies described above become fully embedded within the major information distribution platforms. We would like to see the platforms themselves, accountable to independent expert bodies established through legal mandate, master the process of identifying, minimizing, and helping the public navigate misinformation — without interfering with constitutionally protected speech rights.

    Given the enormous and now proven value of information analysis to support public health and institutions, we can imagine, and are now developing, a solution in which platforms are compelled to invest much more in the tools and approaches that work. We’re thinking of this as a trust fund, or “superfund,” modeled on the 1980 Superfund for cleaning up toxic waste sites. Unlike other, similar concepts (see here, here, and here), though, we don’t believe a punitive “tax” on advertising revenue — which isn’t really the direct source of the problem — is the preferable approach. We favor an approach of value creation, since the pandemic has given us such a powerful model for its benefits. The pandemic has essentially created a market in which the platforms have more demand for — and journalistic organizations have more supply of — information cleansing services.

    The platforms should pay for these services to help to clear the toxic junk from their platforms, at a fair price. In doing so, we can provide an essential new revenue stream to local journalistic organizations and information analysts who also help protect our public and democratic institutions.

    Not “Back to Normal” for Misinformation 

    We’ve seen a serious and growing list of harms resulting from digital speech, particularly in contexts where the stakes of information quality are high, where the spread of mis- and disinformation is virulent and destructive, and where salience or engagement is high. We may never know how much of the platforms’ efforts to counter misinformation during the pandemic were part of a larger strategy to reverse “the techlash narrative” and the momentum toward regulation that had been building over the past few years. But we can still take the position that their efforts shouldn’t depend on continued good will and philanthropy, or proceed in the absence of oversight. Policy can be used to ensure the platforms continue to embrace official sources and reputable media outlets.

    Returning to that opinion piece from 2019:

    “[Fighting misinformation] is going to be something that government health organizations, medical professionals, social media companies and the traditional media are going to have to work out together. There isn’t any single solution; it will require many different interventions that will all need to work together… methods of visibly identifying false stories, the identification and deletion of fake posts and accounts, ways to promote official and accurate news, and so on.”

    Just like so many other aspects of life during the pandemic, we shouldn’t expect — or allow — the platforms to go “back to normal” when the crisis is over. As the whole world has gone online for working, learning, telehealth, and entertainment, the platforms’ power has only grown, and with it, their responsibility and accountability to the public. We need both a policy framework and a specialized regulatory authority to limit their anti-competitive behaviors, protect Americans’ privacy, and stop or slow the spread of disinformation and hate speech. A superfund for the internet, which also fosters reputable journalism, is the next step.