Posted on October 1, 2019

Government AI Rules to Require Diverse Teams to Prevent Racist and Sexist Algorithms

Olivia Rudgard, Telegraph, September 20, 2019

The Government is to pilot diversity regulations for staff working on artificial intelligence to reduce the risk of sexist and racist computer programs.

New guidelines state that teams commissioning the technology from private companies should “include people from different genders, ethnicities, socioeconomic backgrounds, disabilities and sexualities.”

The UK is the first country to test the guidelines for government use of the technology, which were drawn up by the World Economic Forum with input from the Government’s Office for AI and private companies including Deloitte and Salesforce.

Many AI programs have attracted criticism for discriminating against certain groups, particularly women and ethnic minorities.

In October last year Amazon was forced to scrap a recruiting tool designed to automate the company’s search for new hires after it realised the tool had been favouring men.

The algorithm was trained on CVs previously submitted to the company, which were more likely to be from male candidates.

A study published last week by the Government’s advisers, the Centre for Data Ethics and Innovation, found that police officers were concerned that potentially biased AI tools might discriminate against ethnic minorities or poor people.

Officers suggested that using police data to train tools for stop and search might reproduce ingrained police prejudice and lead to discrimination against young black men.

Eddan Katz, AI project lead at the World Economic Forum, said that AI projects often reproduce the prejudices or blind spots of teams whose members all come from similar backgrounds.

He said that recent incidents, such as the controversy over the magazine Forbes’s list of America’s 100 most innovative leaders, which included only one woman, showed the importance of having diverse teams.

The list was compiled using a methodology developed by three men, and prompted a backlash from female business leaders.

“It’s not surprising that there were no women on that dev team,” he said. “Someone inside Forbes should have noticed that that’s kind of weird.”

As well as diversity of personal attributes, he said, teams should have people with different professional backgrounds, including ethicists and experts in civil liberties.

“It’s not enough to just have computer scientists,” he added.

The guidelines are expected to increase the adoption of AI in the public sector. Mr Katz said civil servants were currently hesitant to introduce such schemes because they were worried about the risks of the technology.

“A lack of expertise regarding the technology and socio-economic impacts is also a major contributing factor,” he said.

The Department for Transport, the Ministry of Defence, the Home Office, and local government are expected to test the guidelines in pilots next month.

Minister for Digital Matt Warman said: “These new guidelines place the UK at the forefront of procuring AI and will help the public sector better serve the public, make it easier for firms bidding for new contracts and set a world standard in how governments work with artificial intelligence.”