Posted on February 13, 2018

Facial Recognition Is Accurate, If You’re a White Guy

Steve Lohr, New York Times, February 9, 2018

Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph.

When the person in the photo is a white man, the software is right 99 percent of the time.

But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and genders.

These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.

Facial recognition algorithms made by Microsoft, IBM and Face++ were more likely to misidentify the gender of black women than that of white men.

In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it. If there are many more white men than black women in the system, it will be worse at identifying the black women.

One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.
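The mechanism is easy to demonstrate in miniature. The sketch below is a toy Python example on synthetic data (it is not the study's code; the groups, feature shifts and sample sizes are invented): a classifier trained on 900 examples from one group and only 100 from a second, differently distributed group scores noticeably worse on the under-represented group.

```python
# Toy demonstration (synthetic data, hypothetical groups): a model
# trained on group-imbalanced data tends to make more errors on the
# under-represented group. Nothing here reproduces the study itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift, label_noise=0.1):
    """Two-feature samples; each group's features are shifted differently."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    flip = rng.random(n) < label_noise  # mislabel a small fraction
    y[flip] = 1 - y[flip]
    return X, y

# Imbalanced training set: 900 samples from group A, only 100 from group B.
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out evaluation, disaggregated by group.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 2.0)]:
    Xt, yt = make_group(1000, shift=shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.3f}")
```

An aggregate accuracy number would hide the gap; only evaluating each group separately, as the study does, makes it visible.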

{snip}

Facial recognition technology is lightly regulated so far.

“This is the right time to be addressing how these A.I. systems work and where they fail — to make them socially accountable,” said Suresh Venkatasubramanian, a professor of computer science at the University of Utah.

{snip}

In her newly published paper, which will be presented at a conference this month, Ms. Buolamwini studied the performance of three leading face recognition systems — by Microsoft, IBM and Megvii of China — by measuring how accurately they classified the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software — and their code was publicly available for testing.

She found them all wanting.

To test the commercial systems, Ms. Buolamwini built a data set of 1,270 faces, using faces of lawmakers from countries with a high percentage of women in office. The sources included three African nations with predominantly dark-skinned populations, and three Nordic countries with mainly light-skinned residents.

{snip}

{snip} The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males.
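In spirit, numbers like these come from a disaggregated evaluation: run every benchmark image through a system's gender classifier, then tally the error rate separately for each skin-type and gender subgroup rather than over the whole set. A minimal sketch of that bookkeeping follows; the records, group labels and field names are hypothetical, not the paper's actual data or code.

```python
# Hypothetical disaggregated audit: per-subgroup error rates from
# (true_gender, skin_group, predicted_gender) records. Illustrative only.
from collections import defaultdict

records = [
    ("female", "darker",  "male"),    # one tuple per benchmark image
    ("female", "darker",  "female"),
    ("male",   "lighter", "male"),
    ("female", "lighter", "female"),
    # in practice, one record for each of the 1,270 faces
]

tallies = defaultdict(lambda: [0, 0])  # (skin_group, gender) -> [wrong, total]
for true_gender, skin_group, predicted in records:
    key = (skin_group, true_gender)
    tallies[key][0] += int(predicted != true_gender)
    tallies[key][1] += 1

for (skin_group, gender), (wrong, total) in sorted(tallies.items()):
    print(f"{skin_group}-skinned {gender}s: {wrong / total:.1%} error "
          f"({total} images)")
```

A tally along these lines, computed once per vendor, would yield subgroup figures of the kind reported above.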

Ms. Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women.

{snip}

“There is a battle going on for fairness, inclusion and justice in the digital world,” Mr. Walker [Darren Walker, president of the Ford Foundation] said.

Part of the challenge, scientists say, is that there is so little diversity within the A.I. community.

{snip}

Technology, Ms. Buolamwini said, should be more attuned to the people who use it and the people it’s used on.

“You can’t have ethical A.I. that’s not inclusive,” she said. “And whoever is creating the technology is setting the standards.”