Posted on January 29, 2019

Study Finds Racial Bias in Amazon’s Facial Recognition Tech

Michael Kan, PC Magazine, January 25, 2019

{snip}

A new study published on Thursday looked at whether the company’s Rekognition product can accurately identify a person’s gender. On photos of men, Amazon’s system was nearly flawless. But not so much when it came to women with dark skin tones: The Rekognition product misclassified their gender 31 percent of the time.

The study came from Joy Buolamwini of the MIT Media Lab and Deborah Raji of the University of Toronto. A year ago, the two researchers tested the facial recognition products from Microsoft and IBM, and found the systems also struggled to accurately identify the gender of darker-skinned women. In response, both Microsoft and IBM updated their facial recognition technology to reduce the error rates.

Buolamwini is now calling on Amazon to address the bias of the company’s Rekognition product, amid growing worries the technology is both error-prone and ripe for abuse. “In light of this research, it is irresponsible for the company to continue selling this technology to law enforcement or government agencies,” Buolamwini wrote in a separate blog post.

However, Amazon is dismissing her study, calling the results inaccurate. The company’s objection is that the researchers used the Rekognition product to conduct “facial analysis” rather than true “facial recognition,” according to Matt Wood, the general manager of artificial intelligence at Amazon Web Services.

In facial analysis, the computer system is trying to assign generic attributes to a picture, such as whether the person shown is wearing glasses, has a mustache, or may be female. Recognition is different; it focuses on trying to find matching photos of a particular face, like scouring through a large collection of mugshots and plucking out the ones that look like you.
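
To make the distinction concrete, here is a minimal sketch of the two modes using the AWS SDK for Python (boto3): the DetectFaces operation performs the attribute-style analysis the researchers measured, while CompareFaces performs match-style recognition. The bucket and image names below are placeholders, not details from the study.

    # A minimal sketch of the two Rekognition modes, using boto3.
    # Bucket and image names are placeholders, not taken from the study.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # Facial ANALYSIS: DetectFaces assigns generic attributes (gender,
    # eyeglasses, mustache, and so on) to each face in a single image.
    analysis = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo.jpg"}},
        Attributes=["ALL"],
    )
    for face in analysis["FaceDetails"]:
        gender = face["Gender"]  # e.g. {"Value": "Female", "Confidence": 96.1}
        print(gender["Value"], gender["Confidence"])

    # Facial RECOGNITION: CompareFaces searches the target image for
    # faces matching the largest face found in the source image.
    matches = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
        TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "lineup.jpg"}},
        SimilarityThreshold=90,
    )
    for match in matches["FaceMatches"]:
        print(match["Similarity"], match["Face"]["BoundingBox"])

Note that recognition against large galleries, such as a mugshot database, would typically use Rekognition’s IndexFaces and SearchFacesByImage operations against a stored face collection rather than pairwise CompareFaces calls; the pairwise form is shown here only to illustrate the contrast with attribute analysis.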

{snip} Wood said in a 1,500-word blog post on Saturday addressing the study, “Trying to use facial analysis to gauge the accuracy of facial recognition is ill-advised, as it’s not the intended algorithm for that purpose.”

{snip}

But Buolamwini is pushing back against Amazon’s claims. “If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free,” she wrote in her own 3,000-word blog post, which responds to the company’s criticism of her study.

{snip}

Their study arrives as dozens of civil rights groups and even Amazon employees have spoken out against the company’s plans to sell the facial recognition software to government agencies. A major worry is that the technology will one day power mass surveillance and unfairly discriminate against people of color. Later this spring, a group of Amazon investors plans to call for a shareholder vote demanding that the e-commerce giant halt the sales until the technology is proven safe.

In response to the concerns, Amazon has said, “This technology is being implemented in ways that materially benefit society, and we have received no indications of misuse.” If abuse is ever detected, the company has pledged to bar the offender from further use of its software.

According to the e-commerce giant, police have been using Rekognition to quickly spot criminal suspects in captured security footage. {snip} The same system has also helped investigators generate leads in abduction cases by scanning the internet for new photos of missing victims.

{snip}