
Amazon Accidentally Makes Rock-Solid Case for Not Giving Its Face Recognition Tech to Police

Days after the ACLU released a damning report on Amazon’s face recognition product, Rekognition, Amazon’s general manager of AI, Dr. Matt Wood, countered its findings in a blog post. The ACLU had used Rekognition to scan the faces of all 535 members of Congress against a database of mugshots, and the software mistook 28 of them for suspected criminals. Dr. Wood notes first that the ACLU hasn’t revealed its methodology or dataset, then reiterates Amazon’s original response: that it encourages law enforcement to use higher confidence thresholds.

Conspicuously missing from the blog, however, was any specific rebuttal to the enormous racial disparity the ACLU uncovered. Across Congress as a whole, only five percent of members were falsely matched, yet 39 percent of the false matches were members of color, even though non-white members make up only about 20 percent of Congress. When one centers this racial disparity, as the blog post does not, Dr. Wood’s final point reads as an argument against law enforcement use of face recognition wholesale.
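The top-line numbers are easy to check. Here’s a quick back-of-the-envelope pass in Python; the 11-of-28 breakdown is the ACLU’s reported count of false matches involving members of color:

```python
# Sanity-checking the ACLU's reported figures.
total_members = 535      # every member of Congress was scanned
false_matches = 28       # members misidentified as arrestees
poc_false_matches = 11   # false matches of members of color, per the ACLU

print(f"Overall error rate: {false_matches / total_members:.1%}")  # ~5.2%
print(f"Members of color among false matches: {poc_false_matches / false_matches:.1%}")  # ~39.3%
```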


He writes:

“In addition to setting the confidence threshold far too low, the Rekognition results can be significantly skewed by using a facial database that is not appropriately representative that is itself skewed. In this case, ACLU used a facial database of mugshots that may have had a material impact on the accuracy of Rekognition findings.”


Let’s break this down.

If an agency scans a group of people against a database, the makeup of that database will, of course, shape the results. A database of faces sourced from Guadalajara probably won’t be much use for scanning faces in a mall in Wisconsin. Dr. Wood argues that the arrestee database the ACLU scanned members of Congress against could itself have been skewed or unrepresentative.
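For concreteness, here is roughly what such a scan looks like in code. This is a minimal sketch using boto3’s Rekognition client; the collection name and image file are hypothetical stand-ins for the arrestee database and a probe photo:

```python
import boto3

# Hypothetical setup: a Rekognition face collection pre-populated
# with arrest photos, standing in for the database described above.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("probe_face.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="mugshot-collection",  # hypothetical collection name
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,              # the threshold the ACLU's test ran at
        MaxFaces=5,
    )

# Each match carries a similarity score; whatever is in the
# collection determines what can come back.
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], f"{match['Similarity']:.1f}% similar")
```

Everything the call can return comes from that collection, which is the whole point: the database’s makeup bounds the results.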


But here’s what we know about the faces of people being arrested in America: they skew darker. Congresspeople skew lighter. If the overrepresentation of darker-skinned people in mugshot databases played a part in the staggering racial disparity in the ACLU’s findings, the same argument can be made nearly any time law enforcement scans people against a “skewed” criminal database.

Wood’s blog reflects a difficulty common across Silicon Valley: grasping how statistical bias (the over- or underrepresentation of certain skin tones in a dataset, which drags down accuracy for those groups) overlaps with racial bias (face recognition used to abet prejudiced policing). The two feed each other. Darker-skinned people are arrested more often, appear in criminal databases more often, and are therefore matched more often. That overrepresentation is real, and it long predates the normalization of face recognition in law enforcement.
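The mechanics of that feedback loop show up even in a toy model. The sketch below is not Rekognition; it simply assumes every database entry has the same small chance of false-matching a probe photo, and varies only how many entries each group has:

```python
# Toy model: each database entry independently false-matches a probe
# with probability p. The only difference between the groups is how
# many entries resemble them in the database.
p = 0.0001  # illustrative per-entry false-match probability

entries_by_group = {"overrepresented group": 8000, "underrepresented group": 2000}

for group, n in entries_by_group.items():
    at_least_one = 1 - (1 - p) ** n  # chance of one or more false matches
    print(f"{group}: {at_least_one:.0%} chance a probe hits a false match")
```

With identical per-face accuracy, the overrepresented group ends up false-matched roughly three times as often (about 55 percent versus 18 percent here), purely because the database contains more faces that can be confused with theirs.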


Frankly, it’s an argument against all law enforcement face recognition, because criminal databases are not, and never have been, perfectly representative of either the U.S. at large or the members of Congress.

The blog closes by proposing safeguards for face recognition in the hands of law enforcement, similar to those argued for by Microsoft president Brad Smith. First, Amazon recommends that law enforcement act only on matches made at a 99 percent confidence threshold, far higher than the 80 percent used in the study. Smith has similarly proposed a minimum threshold for law enforcement. These are recommendations, however, not rules. Couldn’t Amazon refuse to renew contracts with agencies that operate at lower thresholds? Why stop at a mere “recommendation”?
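The threshold, after all, is just a number the caller passes in. A sketch of the same hypothetical search at both values, again assuming boto3 and a stand-in collection name, makes the point that nothing in the API enforces Amazon’s recommendation:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def search(image_bytes, threshold):
    """Search a hypothetical mugshot collection at a caller-chosen threshold."""
    return rekognition.search_faces_by_image(
        CollectionId="mugshot-collection",  # hypothetical, as above
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,
    )["FaceMatches"]

with open("probe_face.jpg", "rb") as f:
    image_bytes = f.read()

# 80 is what the ACLU's test used; 99 is Amazon's recommendation for
# law enforcement. The service happily honors either.
loose = search(image_bytes, threshold=80.0)
strict = search(image_bytes, threshold=99.0)
print(f"{len(loose)} matches at 80 percent, {len(strict)} at 99 percent")
```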


Finally, the post says Rekognition should be “one input across others that make sense” and should not replace human judgment, but instead “allow humans to expeditiously review and consider options using their judgment (and not to make fully autonomous decisions).” Sure. That’s very reasonable, right up until you’re the person having to explain to an officer that you aren’t who the computer says you are.

The assumption of hyper-rationality and prejudice-free thinking in every officer with access to this technology speaks to a larger misunderstanding of how race, bias, and technology intersect, and the price is ultimately paid by the very people these defenses ignore.