Application of the “Diversity Principle” in Content Moderation

    The promotion of diverse viewpoints has been a cornerstone of United States media policy for the last 100 years. In November 2018, Facebook CEO Mark Zuckerberg published a post outlining how Facebook would use its algorithm to disincentivize hate speech. Although Zuckerberg’s proposal is a laudable step for content moderation, it may neglect the value of exposing people to diverse views and competing sources of news. As we debate moderation issues, platforms should consider not only the prohibition of hate speech, but also the affirmative exposure of users to broader ideas and perspectives. The Federal Communications Commission’s implementation of the diversity principle on radio and TV, explored below, offers some valuable lessons here.

    Facebook’s New Algorithm

    Zuckerberg frames the problem for platforms as how to balance the promotion of free speech against protection from hate speech, while bringing people together as a community and promoting “healthier, less polarized discourse.” As anyone involved in the content moderation debate will recognize, some content clearly falls outside the normal boundaries of robust debate and into the affirmatively threatening, harassing, or dehumanizing. But a great deal more content resides in a gray area: potentially offensive, yet also potentially protected by our normative commitment to free speech, which extends to unpopular or controversial views. Zuckerberg proposes to address this gray area through Facebook’s ranking algorithm, which will downgrade content the closer it gets to the line of prohibited content. That is, the more a specific piece of content raises red flags, the less Facebook will promote it in users’ news feeds. At the same time, someone intentionally looking for or following a controversial speaker will still be able to find the controversial speech.
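    To make the mechanics concrete, here is a minimal sketch, in Python, of how that kind of proximity-based downranking might work. It is not Facebook’s actual system; the classifier score, removal threshold, and penalty curve are assumptions for illustration only.

```python
# Hypothetical sketch of "downrank content as it approaches the policy line."
# The classifier output, threshold, and penalty curve are assumptions for
# illustration, not Facebook's actual system.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # baseline score the feed would otherwise use
    borderline_score: float      # classifier estimate in [0, 1]; 1.0 = clearly violating


REMOVAL_THRESHOLD = 0.95  # at or above this, the post is removed outright


def feed_score(post: Post) -> float:
    """Score used to rank a user's news feed: the closer a post gets to the
    line of prohibited content, the less it is promoted."""
    if post.borderline_score >= REMOVAL_THRESHOLD:
        return 0.0  # prohibited content is taken down, not merely downranked
    penalty = post.borderline_score ** 2  # penalty steepens near the line
    return post.predicted_engagement * (1.0 - penalty)


def search_score(post: Post) -> float:
    """Someone deliberately searching for or following a speaker can still
    find the content: no proximity penalty is applied to direct lookups."""
    if post.borderline_score >= REMOVAL_THRESHOLD:
        return 0.0
    return post.predicted_engagement
```

    The key design choice is that the penalty applies to distribution in the feed, not to the content’s availability to those who deliberately seek it out.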

    A Step Further in the Algorithm

    Using an algorithm to reduce the active dissemination of potentially harmful content to a broader audience should help to address the problems of radicalization, the spread of false stories and conspiracy theories, and accidental exposure to hateful, revolting, or otherwise painful speech. This approach — downgrading borderline content that does not warrant outright deletion — correctly reflects the part of social media platforms that is more like electronic media and less like common carriers. So Facebook, and other social media platforms, should take this a step further. Not only should they consider what content to take down or downgrade without sacrificing the values of free speech, but they should also experiment with other ways to address hyper-partisanship, radicalization, and social division. Specifically, responsible services should focus on more affirmative solutions to the problem of “filter bubbles,” which is closely tied to the tendency of algorithms to promote increasingly extreme content as a means of increasing engagement.

    As explained by former MoveOn.org executive director Eli Pariser, who coined the term “filter bubble,” in his 2011 TED Talk, filter bubbles form because the algorithms that govern search and social media news feeds are designed to learn what sort of content makes us respond. The result, warns Pariser, is that instead of a “balanced information diet,” we are increasingly surrounded by “information junk food” in a way that undermines our ability to get enough news to understand the world around us. As the influence of search and social media has grown, we have become more aware of how these systems segment us into echo chambers, blinding us to those who disagree and reinforcing our own biases. As one author noted, so many people were surprised when President Trump was elected because their filter bubbles had screened out the news stories or personal feeds that would have suggested such an outcome. Others have linked algorithmic selection based on past likes and views to violent radicalization. For example, YouTube “promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes”: searches on vegetarianism lead to content on veganism, and searches on jogging lead to content on ultramarathons.

    This dynamic should not be confused with users intentionally selecting certain content, such as a decision to read only right-wing or left-wing news sites. The difficulty with filter bubbles is the way engagement-driven recommendations shape what information and points of view are even presented to us. Filter bubbles are not about a deliberate decision to choose one source of news over another, or a conscious choice to favor one perspective to the exclusion of all else. They are primarily about how algorithms exclude news sources and perspectives from our awareness without our knowledge, let alone our consent. Loosening the hold of these algorithmic filter bubbles is as important to restoring civic discourse as downgrading the most extreme forms of hate speech or fake news.
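    To see how a bubble can form without any deliberate choice by the user, consider the toy simulation below. It is purely illustrative, with made-up topics, weights, and an assumed click model; no real platform works exactly this way. A feed that boosts whatever the user has engaged with before gradually squeezes every other topic out of view.

```python
# Toy simulation of filter-bubble formation: a feed that boosts whatever the
# user has clicked before gradually squeezes out every other topic.
# Purely illustrative; the update rule and numbers are assumptions.

import random

TOPICS = ["politics_left", "politics_right", "sports", "science", "local_news"]


def simulate_feed(rounds: int = 200, boost: float = 1.2, seed: int = 0) -> dict:
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}  # the feed starts out balanced

    for _ in range(rounds):
        total = sum(weights.values())
        # The feed shows a topic in proportion to its current weight...
        shown = rng.choices(TOPICS, weights=[weights[t] / total for t in TOPICS])[0]
        # ...and the user is somewhat more likely to click topics they already favor.
        if rng.random() < weights[shown] / total + 0.2:
            weights[shown] *= boost  # engagement feeds back into future ranking

    total = sum(weights.values())
    return {t: round(weights[t] / total, 3) for t in TOPICS}


if __name__ == "__main__":
    print(simulate_feed())
```

    Running the simulation typically ends with one or two topics holding most of the feed’s weight, even though the user never chose to exclude anything.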

    Our Long National Policy of Promoting Diversity in Electronic Media to Combat Filter Bubbles

    Diversity has long been an important value in the regulation of electronic media. In 1960, the FCC released a Programming Policy Statement that required broadcasters to ascertain and serve the diverse programming needs of their communities and to provide program plans that would serve those needs. Once a licensee completed its ascertainment effort, the FCC would accord it broad authority to provide diverse programming. This process was based on the FCC’s belief that, absent an incentive to discover and serve minority tastes and interests, “only the wholly inoffensive, the bland, could gain access to the radio microphone or TV camera.”

    The diversity principle has been vigorously applied in the FCC’s regulation of electronic media. One major example is the set-aside of PEG (Public, Educational, and Governmental Access) channels. Pursuant to Section 611 of the Communications Act, local franchising authorities may require cable operators to set aside channels for public, educational, or governmental use, and the Supreme Court has determined that cable operators generally may not control the programming on PEG channels. Similarly, must-carry rules, first instituted by the FCC in 1965, require cable operators to set aside a specified portion of their channels for local commercial and non-commercial television stations. Across the U.S.’s 100+ years of history with electronic media, then, the FCC has long been concerned with affirmatively promoting exposure to diverse content. The question of regulating social media should therefore not be approached as an entirely novel idea, but as a natural evolution of this history of electronic media regulation.

    What Would the Diversity Principle Look Like for Platform Regulation?

    A diversity principle for platforms would look something like the visual above. Within the bell curve encompassing all content, rather than concentrating on the curve’s peak, where content draws maximum engagement, platforms would promote more “diverse” content that draws less engagement than the peak but still draws significant engagement. By feeding in content that is similar to the sensationalized content, but less violent or politicized and still within the zone of the user’s interest, we would disperse the attention currently concentrated on the most extreme content. As in Facebook’s current algorithm, the amount of user engagement serves as an explicit proxy for what should or should not be promoted, but the diversity principle takes a more expansive approach by affirmatively feeding in alternative content that is worth promoting. In other words, rather than merely penalizing borderline violent, gruesome, politicized, or sexually explicit content, platforms would promote more content at the other ends of the zone of the user’s interest, shown in the shaded area of the visual above, to enhance the diversity and inclusion of ideas that may not be as viral, but are equally or more informative and educational.
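    As a rough sketch of how the diversity principle could be wired into a recommender, the Python below reserves a share of each feed for content that still matches the user’s interests but sits away from the engagement peak. All of the field names, weights, and the 30 percent diversity share are hypothetical; this is an illustration of the idea, not a prescription.

```python
# Hypothetical sketch of a diversity-principle re-ranker: instead of serving
# only the highest-engagement items, reserve part of the feed for related
# content from the tails of the engagement curve. All weights are assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    item_id: str
    predicted_engagement: float  # proxy the platform already computes
    topic_relevance: float       # similarity to the user's interests, in [0, 1]
    extremity: float             # how sensationalized/extreme the item is, in [0, 1]


def rank_with_diversity(candidates: List[Candidate],
                        feed_size: int = 10,
                        diversity_share: float = 0.3) -> List[Candidate]:
    """Fill most of the feed by predicted engagement, then reserve a share for
    relevant-but-less-extreme items to disperse attention from the peak."""
    by_engagement = sorted(candidates, key=lambda c: c.predicted_engagement, reverse=True)

    n_diverse = int(feed_size * diversity_share)
    n_top = feed_size - n_diverse

    feed = by_engagement[:n_top]
    remaining = by_engagement[n_top:]

    # Diversity slots: still within the zone of the user's interest,
    # but favoring items away from the most extreme, most viral content.
    diverse_pool = sorted(
        remaining,
        key=lambda c: c.topic_relevance * (1.0 - c.extremity),
        reverse=True,
    )
    feed.extend(diverse_pool[:n_diverse])
    return feed
```

    The design choice mirrors the argument above: engagement remains the proxy the platform already computes, but a portion of attention is deliberately redirected from the peak of the curve toward relevant, less extreme material.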

    Even if the diversity principle is not adopted, future regulatory regimes for digital platforms should find a way to apply a “public interest” analysis to these filter bubbles. The FCC already uses the public interest standard in merger review, and that analysis includes First Amendment concerns about the diversity of expression in electronic media. Although Facebook’s current algorithm is a praiseworthy step in the right direction, Facebook could moderate content more effectively by adhering to a public interest standard that includes alternative viewpoints in “trending” conversations on its platform.

    Conclusion

    The diversity principle serves a broader goal beyond just preventing radicalization. It would also steer platforms toward the tried-and-true media regulation policy of ensuring that a variety of perspectives are represented in the digital public square. This would also be in keeping with Facebook’s goal for its community standards to “err on the side of giving people a voice while preventing real world harm and ensuring that people feel safe in our community.” With a wider range of viewpoints in their feeds, people may be less likely to go down the path of radicalization.