Oakland is the third US city to ban facial recognition, after San Francisco, CA, and Somerville, MA. A campaign has been launched to ban it nationwide, and now a bill is being introduced to do just that. Recode reports that “two top lawmakers, Rep. Elijah Cummings (D-MD) and Rep. Jim Jordan (R-OH), plan this fall to introduce a new bipartisan bill on facial recognition.”
This must be frustrating to retailers and others who want the benefits of this powerful technology. People want to use it for valuable purposes: personalized shopping experiences, reduced friction at store checkout, and better experiences for travelers. But they might have to wait. There are legitimate concerns with this technology. AI makes mistakes, and in the case of law enforcement there is evidence that governments and local authorities do not sufficiently understand the consequences of AI error, whether it stems from bias in the training data, bias in the collection of new data, or plain inaccuracy.
Facial recognition technology is not only powerful, it’s inexpensive and easy to deploy, requiring little technical expertise. A recent Washington Post article discussed how Oregon became the testing ground for Amazon’s Rekognition technology and how it became so pervasive, so rapidly. It is “easy to activate, requires no major technical infrastructure, and is offered to virtually anyone at bargain-basement prices. Washington County spent about $700 to upload its first big haul of photos, and now, for all its searches, pays about $7 a month.”
This raises a legitimate question for regulators: is this all moving too fast? Does a technology this powerful need better guard rails, defined in specific legal requirements? Perhaps. But we’d assert that many facial recognition systems simply need better, human-centered design, including better governance and consideration paid to privacy, accountability and feedback.
With AI, there are always tradeoffs, so techniques that help people reason through these decisions matter all the more. For example, if accuracy is the primary problem, what’s the best solution? One option is to make design choices that tolerate more false negatives in exchange for fewer false positives. While this may degrade the system’s usefulness as a definitive “source of truth,” it puts more onus on users to be critical of what the AI tells them and can reduce automation bias.
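To make the tradeoff concrete, here is a minimal sketch of how a match threshold shifts errors between the two categories. The scores, labels, and threshold values are purely illustrative, not drawn from any real recognition system; the point is only that a stricter threshold produces fewer false positives at the cost of more false negatives.

```python
# Illustrative only: how a similarity threshold trades false positives
# for false negatives in a match/no-match decision.

def classify(scores, threshold):
    """Flag a candidate as a match when its similarity score meets the threshold."""
    return [s >= threshold for s in scores]

def error_counts(predictions, ground_truth):
    """Count false positives and false negatives against known labels."""
    fp = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
    fn = sum(1 for p, t in zip(predictions, ground_truth) if not p and t)
    return fp, fn

# Hypothetical similarity scores and whether each candidate truly matches.
scores = [0.95, 0.90, 0.72, 0.68, 0.55, 0.40]
truth = [True, True, True, False, False, False]

# A permissive threshold catches every true match but flags innocents too.
fp_low, fn_low = error_counts(classify(scores, 0.50), truth)
# A strict threshold flags no innocents but misses a true match.
fp_high, fn_high = error_counts(classify(scores, 0.85), truth)
```

With the permissive threshold the system returns two false positives and no false negatives; with the strict one, zero false positives and one false negative. Which point on that curve is acceptable is exactly the kind of decision that should be made deliberately, not left implicit in a vendor default.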
Perhaps one creative solution could involve adding features to the product that give people choice and control: an alert that they can act upon should their face be surfaced or queried, and a process demonstrating that the people who operate the system are available and accountable for exception management. After all, individuals are the owners of their faces, and design should intuitively reinforce this. While this feature doesn’t solve every problem and isn’t practical in every system, it could go a long way toward establishing trust in some systems. To date, this kind of feedback feature simply doesn’t exist.
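As a thought experiment, the alert-and-accountability idea could be sketched as an audit record written on every face query, naming a responsible human operator and producing a notification the affected person could act on. Every class, field, and message below is hypothetical; this is a sketch of the design principle, not a real system's API.

```python
# Hypothetical sketch: every face query writes an auditable record with an
# accountable human operator, and yields an alert the subject could contest.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class QueryAuditRecord:
    subject_id: str    # the person whose face was queried
    operator_id: str   # an accountable human operator, not just "the system"
    purpose: str       # the stated reason for the query
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class AuditLog:
    def __init__(self):
        self.records = []

    def log_query(self, subject_id, operator_id, purpose):
        """Record the query and return the alert the subject would receive."""
        record = QueryAuditRecord(subject_id, operator_id, purpose)
        self.records.append(record)
        return self._alert_for(record)

    def _alert_for(self, record):
        # A real system would deliver this through some notification channel;
        # here we simply return the message text.
        return (f"Your face was queried by operator {record.operator_id} "
                f"for purpose: {record.purpose}. You may contest this query.")

log = AuditLog()
alert = log.log_query("subject-42", "op-7", "access-control check")
```

The design choice worth noticing is that accountability is a field in the record, not an afterthought: no query exists without a named operator and a stated purpose.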
Feedback and control features are more costly and would change the risk profile for AI system developers, but isn’t that the point? They would involve more people with more perspectives, and might yield a significantly better result. Without good governance design, people don’t know who is accountable or what choices they have to opt in or out. They have no explanation of the system, and no way to query or contest bias in its data or its overall performance.
AI designed without the involvement of the people it acts on risks pushback. And once the stakes are high enough, as is the case with facial recognition, it ultimately invites a slowdown or shutdown, driven either by people inside the organization or by people outside it, via regulation. If the companies developing and using AI can’t protect users, then it’s reasonable to expect lawmakers to step in.
Facial recognition technology need not be a surveillance and privacy disaster. Its benefits are real and, with the right choices and controls, it can be a valuable technology. But it’s been designed and deployed in ways that violate our sense of fairness and do not increase trust or address the fundamental human desire for choice and control.