Are You Wielding a Discriminatory Algorithm Against Consumers? The FTC Is Coming For You — But It Can’t Act Alone

April 29, 2021

Last week, the Federal Trade Commission released a blog post entitled “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI.” It’s making waves. The responsibilities the FTC outlines for companies, like being transparent, non-discriminatory, fair, and accurate, and being able to explain automated decisions to consumers, were all included in the agency’s blog post on the very same subject last year. What’s new is the tone. The FTC has explicitly stated that creating or using a discriminatory algorithm could be considered unfair under Section 5 of the FTC Act. This means the FTC has moved from merely providing guidance to making clear that enforcement is on the way. If companies aren’t mitigating their algorithms’ harmful effects, “the FTC may do it for [them].”

It’s great to see the FTC’s willingness to exercise its Section 5 authority, instead of waiting for Congress to give it permission to go after creators and buyers of unfair or deceptive algorithms. The focus on racial justice, something Acting Chairwoman Slaughter has championed, is especially relevant in the context of AI. Automated recommendation tools are prevalent in everything from hiring and college admissions to credit, insurance, and even housing determinations, and the FTC’s blog post highlighted the discriminatory harm these tools can do to consumers. Those very same areas receive heightened scrutiny under civil rights laws. Focusing on use cases that are already illegal under those laws has the benefit of concentrating the Commission’s limited enforcement resources on uses that have already been recognized as doing egregious harm.

The FTC has drawn a line in the sand. It’s time for the Biden administration and Congress to join the fray if we’re going to protect consumers in this emerging space.

The FTC Should Move to Create Binding Rules

As of this writing, the FTC has not proposed any rules that expressly regulate automated decision-making algorithms or AI. In both blog posts, the FTC cites the statutes the agency enforces, past cases it has settled, and reports and workshop findings as the basis for its guidance. Apart from the statutes, none of those sources is binding on companies. If the FTC does bring a case alleging a company was not transparent or its algorithm not explainable, it’s not certain that a court would share the FTC’s interpretation of companies’ responsibilities for using AI.

This means that the FTC will probably have to engage in rulemaking to make its guidance documents, like blog posts, binding. Fortunately, Acting Chairwoman Slaughter has prioritized rulemaking and even created a new group within the FTC to help streamline the process. And while the FTC’s Magnuson-Moss rulemaking authority is generally believed to be overly burdensome and too slow a process to create meaningful rules, in this instance, the FTC’s expertise with the subject matter, as well as the rule’s timeliness and political necessity, may help speed the process along. Even if comprehensive AI regulation is not in the cards, rulemaking on a narrower subject like data collection and use could change the landscape significantly.

The Biden Administration Should Coordinate Agency Action on Tech and Civil Rights

The administration has made it clear that promoting racial equity and supporting underserved communities will be a priority. As part of that ongoing goal, the administration should act as a convener and coordinator of action that advances its civil rights agenda.

As noted above, the types of automated decision-making systems that the FTC is particularly concerned with operate in sectors of the economy that are covered by civil rights laws. But those laws are enforced by a variety of agencies and commissions, like the Equal Employment Opportunity Commission, the Department of Housing and Urban Development, and the Department of Justice. At a bare minimum, agencies should be informing each other of enforcement actions that could affect multiple agencies’ civil rights missions. What would be more useful is if the administration convened a task force through the White House’s Office of Science and Technology Policy. The task force, as described by Laura Moy and Gabrielle Rejouis in their Day One Project paper, would be responsible for coordinating agency action that occurs at the intersection of technology and civil rights. The task force would make it easier for agencies to share expertise, develop enforcement strategies, and proactively regulate this emerging space.

Congress Must Restore the FTC’s Ability to Protect Consumers Through Restitution

Congress must ensure that the FTC has sufficient tools to both regulate this emerging market and enforce the laws against bad actors. The Supreme Court just decided that Section 13(b) of the FTC Act does not grant the agency the authority to seek equitable monetary relief, including remedies like restitution and disgorgement. For the past 40 years, the FTC has used Section 13(b) not only to enjoin companies from continuing their offending behavior, but also to obtain restitution for victims and strip companies of their ill-gotten gains. With this decision, the FTC can no longer actively protect consumers without clearing significant administrative hurdles. Congress must act to preserve the ready availability of these remedies for the FTC.

The recent Everalbum settlement shows how important it is for the FTC to retain flexibility in seeking remedial measures against companies. Everalbum was a photo app with a “friends” feature that would group users’ photos and provide tags using facial recognition. Everalbum promised that facial recognition would only be used as part of those specific features and that it would only run facial recognition on users’ photos if they affirmatively opted in. Those promises were lies. Facial recognition was turned on by default, and users’ photos were made part of datasets used to train Everalbum’s facial recognition technology. Beyond requiring deletion of all facial recognition data from users who did not expressly consent, the FTC also required Everalbum to delete any models or algorithms created with that data. This pro-consumer resolution would not have been possible if the FTC were limited in the relief it could seek.

While it is important that the FTC continue to regulate emerging AI technologies, the agency notes that these systems are already being integrated into fields like healthcare, finance, media, business, and more. The ubiquity of these types of tools is fast approaching. The EU has already proposed new AI rules, as well as a new oversight body to go with them. Although I don’t believe an AI-specific regulator is necessary, a broader digital regulator would be extremely beneficial.

Public Knowledge has made the case for why a digital regulator is necessary to enable a broad swath of goals (like promoting competition or protecting privacy). In this context, the digital regulator could act as the expert agency for all things AI-related. This means the regulator would be able to hire technologists, lawyers, and behavioral science researchers, all of whom represent expertise necessary to create nuanced AI regulation. The digital regulator would be able to test and audit specific algorithms for their discriminatory impact and general fairness. It could also create rules on transparency and explainability that would cut across sectors. More importantly, it could advise other agencies on sector-specific AI rulemakings as well.

The FTC has taken its first step toward a more robust approach to AI and automated decision-making. Soon, the increasing number of companies wielding discriminatory algorithms against consumers may have to answer for their actions — but only if Congress and the Biden administration step in. Without this support, consumers, especially the vulnerable, marginalized people most likely to suffer the consequences of unfair algorithms, will be at the mercy of biased AI. We need a strong FTC to regulate this sector as these algorithms gain ground and further establish themselves in our lives. And the clock is ticking.

Image credit: vpnsrus.com


About Sara Collins

Sara Collins joins Public Knowledge as a Policy Counsel focusing on all things privacy. Previously, Sara was a Policy Counsel on Future of Privacy Forum’s Education & Youth Privacy team and specialized in higher education. She has also worked as an investigations attorney in the Enforcement Unit at Federal Student Aid, as well as the Director of Legal Services for Veterans Education Success. Sara graduated from the Georgetown University Law Center in 2014, where she was the symposium editor of the Journal of Gender and the Law. After graduating law school, she completed a Policy & Law Fellowship at the Amara Legal Center, an organization dedicated to fighting domestic sex trafficking within the DMV area. Originally from Chicago, Sara attended the University of Illinois, where she received a B.A. in both Political Science and English.