The Election Misinformation War Has Only Just Begun
November 12, 2020
Media organizations and social media platforms prepared for the 2020 presidential election as if they were going to war. Informed by COVID-19 misinformation and conspiracy theories, demands from both sides of the political aisle to adjust their content moderation policies, and at least four years of orchestrated political misinformation campaigns, our mainstream media and social media networks had plenty of time to build a defense against election misinformation. Multiple civil society groups prepared comprehensive trackers, scorecards, and roadmaps to help in this endeavor, often scouting for potential weaknesses and providing policy “armor” so the media and platforms could respond to misinformation quickly and effectively. If the fight against misinformation is a war, then the bad actors waging it conveniently drew the media an invasion map for this election.
How have we fared so far?
The Election Day Battle
Our information defenses held up well in the run-up to Election Day, enabling a smooth voting process for the vast majority of Americans. Despite some misinformation shrapnel flying around about mail-in and early voting, the potential impact of COVID-19, and improper conduct at polling stations, voting proceeded with minimal disruption and with record numbers of people participating. This suggests that we successfully managed the first two probable battles in the election misinformation war: procedural interference, or misleading information about actual election procedures; and participation interference, or content that deters people from voting. And two of the major combatants from 2016 — commercial troll farms creating fake news and foreign actors attempting election interference — seemed to play minor roles, if any, in the 2020 election. Instead, the major threat in this war has been domestically produced misinformation planned and seeded by partisan political actors, including the President of the United States.
We learned, as we did in the early months of the COVID-19 pandemic, that in a high-stakes situation — where the potential for harm is high and the number of credible sources is finite — platforms can and will develop new standards and solutions to problems they previously claimed were impossible to solve. Several platforms used proactive posts and custom content to warn that election results would be delayed due to mail-in and early voting (since in some states these could not be counted until Election Day) and directed users to sources of accurate, real-time information. Most had distinct strategies for the likeliest misinformation battles, and they accounted for “shades of grey” with strategies like adding their own informational content, introducing friction to make sharing more considered, and algorithmically demoting content instead of sticking to a “leave up/take down” binary.
Many of the platforms’ most effective approaches were enabled, as they have been for COVID-19, by partnering with authoritative sources of information (for example, Facebook, Twitter, YouTube, and TikTok all announced in advance what source they would use for calling election results). We saw increasing evidence that fact-checking and labeling content can reduce the spread of misinformation. Several platforms expanded or strengthened their approaches as they encountered new challenges during the week — a rapid iteration we have rarely seen from the platforms before. And researchers and academics provided real-time artillery spotting, calling out for the platforms and media outlets content that should be removed, page owners that represented threats, and the potential harms of specific content. The platforms’ efforts were far from perfect — YouTube, in particular, proved to be as vulnerable as the experts predicted — but we seem to be on the other side of an inflection point in terms of the platforms taking accountability for the content they host.
But we’ve also learned again that our contemporary news and information ecosystem is complex and interconnected, with quality information and misinformation flowing across both legacy and digital media. Often, misinformation originates on fringe websites, gets picked up and amplified by partisan media on television, radio, and in newspapers, and is then spun back out online. We saw more aggressive efforts by the major outlets to short-circuit this dynamic. For example, there were robust efforts in mainstream as well as social media to “pre-bunk” the likelihood of an extended vote-counting process. Major media generally exercised caution in their reporting of results, given the complexity brought about by massive numbers of citizens voting early, as well as the scars they still bear from their premature conclusions in 2000 and 2016. And we saw the major broadcast networks, including ABC, CBS, NBC, CNBC, MSNBC, and Fox News, break into live press conferences for real-time fact-checking when the president or his press secretary made claims about fraudulent votes and a stolen election.
That’s because by the end of Election Day, the war on election misinformation had moved to a new front: The extended vote-counting period, as expected, opened up a battle against misinformation aimed at delegitimizing the election results. Within hours after the polls closed — in fact, even before — we heard from the president and his supporters about rigging or tampering with election results, issues with vote counting or tabulation, false claims of victory, and calls to action, including organized action at polling sites. We can expect this battle to continue through what may be protracted legal action to challenge the results, including at the Supreme Court of the United States.
Winning the Battle, But Not the War
But we cannot expect the election misinformation war to end even when results have been litigated and settled and the transition is underway. The war will continue; in fact, the election itself may have been only a skirmish.
That’s because over the past four years, the president and his supporters have created an alternate reality, propagated by a “right-wing propaganda feedback loop” that positions Donald Trump as the true voice of the people. They have spent months laying the groundwork for a false narrative about voter fraud and the theft of an election that threatens to delegitimize the Biden administration in the eyes of their followers — all 72 million of them. In the meantime, downloads of Parler, the unmoderated social media platform favored by conservatives claiming that fact-checking is censorship on other platforms, are exploding. Other alternative platforms also saw dramatic increases. There are rumors that President Trump may start, buy, or license his name to a national broadcast network. In short, this misinformation war isn’t over; the primary combatants will just relocate.
Even without a “Trump TV,” misinformation will appear on new fronts. As Facebook and Twitter clamped down, misinformation about this election migrated to smaller platforms with less robust defenses and fewer resources, including Pinterest, NextDoor, Snapchat, and TikTok. Several media outlets reported higher volumes of misinformation flowing through private groups, robocalls, and text messages. And even the major platforms’ defenses against non-English misinformation remain weak, as evidenced by the amount and possible impact of Spanish-language misinformation on results in Florida.
Against this backdrop, we shouldn’t rely on the platforms alone to decide if or when to intervene in misinformation wars that threaten our democracy. We need policy solutions that can scale and that will buttress and sustain these efforts. We offer a creative, First Amendment-friendly solution, modeled on the Environmental Protection Agency’s 1980 Superfund to clean up toxic waste sites: a “superfund for the internet.” Our proposal calls for the platforms themselves, accountable to an independent body established through legal mandate, to master the process of identifying, minimizing, and helping the public navigate misinformation — without interfering with constitutionally protected speech rights. In doing so, we can provide an essential new revenue stream to local journalism organizations, in the form of a new product or service offering that is consistent with their existing journalism and fact-checking standards.
No single market adjustment can end “the misinformation wars” entirely. In a medium like the internet, where everyone is empowered to speak, everyone shares responsibility for fighting misinformation. However, we believe the “superfund for the internet” can be part of a set of interventions — one that reflects this interconnected information ecosystem as well as a growing body of evidence on what actually works, in practice, to counter the spread of misinformation.
About Lisa Macpherson
Lisa is a Senior Policy Fellow, focused on countering misinformation on the internet and developing alternative business models for local journalism. Prior to Public Knowledge, Lisa was a consumer marketing executive at Fisher-Price, Timberland, Hallmark, and Custom Ink, and an independent marketing consultant at Pernod Ricard. Her experience driving digital marketing transformation on behalf of brands led to concerns over the broader impacts of digital technology on individual well being, civil society, journalism, and democracy. She applied to the Advanced Leadership Initiative at Harvard University, where she is now a Senior Fellow studying how to mitigate the negative externalities of digital technology. Lisa is a current or past member of the Association of National Advertisers, Marketing 50 (M50), and the Marketing Leadership Council of the Conference Board, and a founding member of the Council of CMOs of the Conference Board. In 2017 she was selected as one of the D.C. Techweek 100, which recognizes excellence in technology and entrepreneurship in the DC area. Lisa received her B.A. from Colgate University and her M.B.A. from the State University of New York at Buffalo. She was raised near Boston, MA, and loves to travel, read, cook, and spend time with her daughter, Kelsey.