Microsoft Recognizes and Faces the Challenges of Facial Recognition
December 7, 2018
Yesterday, Microsoft released its facial recognition principles, as well as a call for legislation regulating the technology. There’s a lot to like in Microsoft’s release.
First and foremost, Microsoft recognizes that self-regulation is not enough and calls for “governments” (presumably at all levels) to regulate facial recognition technology. But Microsoft also recognizes that it, and other tech companies, cannot wait for governments to act and commits to implementing its facial recognition principles “by the end of the first quarter in 2019.” Microsoft also aims to create spillover effects for others using facial recognition technology: It is planning trainings and materials to help its customers “use this technology in a responsible way.”
Importantly, Microsoft appears to comprehend the biggest risks associated with facial recognition technology: bias and discrimination, privacy violations, and government surveillance with the concomitant risks to democracy and freedom. And its proposed solutions to two out of three of these risks are surprisingly strong.
Bias and Discrimination: Microsoft acknowledges that facial recognition technologies have disturbingly high error rates with regard to women and people of color and commits to working to identify and reduce those errors, as well as to helping its customers understand the limits of facial recognition technology and deploy the technology in ways that minimize the risk of bias and discrimination. But Microsoft, helpfully, goes further. It calls for ensuring that third parties are able to “test facial recognition technology for accuracy and risk of unfair bias,” as well as for bans on using facial recognition for unlawful discrimination (yes, we live in a world where this has to be spelled out), and, importantly, Microsoft insists on ensuring “meaningful human review.” The bone we might pick here is that it imagines this review only for “consequential cases.” While it suggests that the law will define what exactly “consequential” means, Microsoft is clear that it is thinking of situations “where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged.” This is a fairly broad list, but does it set the bar too high? For example, if facial recognition is used to inappropriately deny someone employment, is that violating her fundamental rights or impinging on her personal freedom? Based on Microsoft’s language, the answer is not actually clear.
Government Surveillance: Microsoft helpfully recognizes that facial recognition should not be used to track individuals in the public way absent legal process – we’re heartened to see that it prefers a probable cause warrant – or an emergency “involving imminent danger or risk of death or serious physical injury.” It also contemplates exceptions for exigent circumstances when no criminal activity is suspected, such as locating a missing person. Microsoft rightly draws parallels to Carpenter v. United States where the Supreme Court held, in the context of historical cell phone location information, that “an individual has a ‘legitimate expectation of privacy in the record of his physical movements.’” This is no less true when an individual’s movements are tracked using facial recognition technology than when they are tracked using cell site location information. After all, where you go says a lot about who you are.
But Microsoft is silent on other forms of government surveillance using facial recognition technology. Can the government use facial recognition technology to identify individuals on a one-off basis? What if they are not suspected of any crime? What if they are exercising their First Amendment rights at a protest or at a mosque? (This is not wholly conjecture – in the early 2000s, the NYPD used automated license plate readers to identify congregants at local mosques; there is no reason to believe it won’t use facial recognition next time.) What, if any, legal process is required here?
Furthermore, under what rubric can private companies voluntarily turn facial recognition technology over to the government? In other contexts, there are limits on voluntary disclosures to law enforcement; limits are appropriate here as well.
Privacy: Microsoft’s section on privacy leaves much to be desired. Microsoft suggests only that individuals should have notice when facial recognition is being used, and it advocates what more or less boils down to implied consent to facial recognition based on the assumption that individuals will see “conspicuous” notice and “vote with their feet.” Microsoft writes, “The law should specify that consumers consent to the use of facial recognition services when they enter premises or proceed to use online services that have this type of clear notice.” Notably, Microsoft sees no limits to this implied consent. If the only airport or train station in an area requires facial recognition at the door and individuals wish not to consent, do they lose their fundamental right to travel? What if the only store selling a particular product in an area requires facial recognition? Are individuals to go without? What if that product is essential?
Even if these concerns were worked out, notice and consent – particularly implied consent – is simply not enough – probably ever, but certainly not when we are talking about immutable biometric characteristics that individuals cannot change if they are compromised. Microsoft even seems to recognize this fact, although it makes no additional privacy proposals. It writes that “consent to use facial recognition services could be subject to background privacy principles, such as limitations on the use of the data beyond the initially defined purposes and the right of individuals to access and correct their personal data.” But it refuses to commit to these additional safeguards. We believe it should.
In fact, any entity that complies only with Microsoft’s proposal would be violating the law in Illinois and in Texas and ignoring best practices articulated in Federal Trade Commission guidance. In addition to notice and consent, these documents require strict data retention limits, limits on tertiary sharing of facial recognition data, limits on repurposing data for uses not contemplated in the initial consent, limits on data sharing with the government absent legal process, and robust security protections for stored facial recognition data. Microsoft’s principles should require these safeguards too.
We’re glad that Microsoft is actively engaging in this important conversation around facial recognition technology. We look forward to seeing how Microsoft implements its principles by the end of Q1 2019. When all stakeholders – from the tech industry, to consumer protection groups, to civil rights and justice advocates – commit to proactive policymaking, we can build on voluntary efforts like this one. However, if we are going to strengthen these industry proposals, policymakers will have to hear from average citizens. Please contact the sitting and incoming members of Congress to tell them to pass robust comprehensive privacy legislation, as well as other important consumer protections around facial recognition and other emerging technologies.