Posted on October 25, 2019

Biased Algorithm Is Delaying Healthcare for Black People in the US

Jessica Hamzelou, New Scientist, October 24, 2019

Black people in the US may be missing out on healthcare because a widely used algorithm is racially biased. The proportion of black people referred for extra care would more than double if the bias were removed, according to new research.

Algorithms are fast becoming a key part of healthcare. {snip}

One example is an algorithm that is used to predict the future health of individuals based on their past health records. Once the algorithm is fed data about a person’s diagnoses, prescriptions and procedures, it spits out a number that predicts the cost of the person’s future healthcare.
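The idea can be sketched in a few lines of code. This is a toy illustration only, not the commercial algorithm the study examined: the feature names, weights and dollar figures are invented, and a real model would be fitted to millions of insurance claims rather than hand-set.

```python
# Hypothetical sketch of a cost-based risk score. All weights and
# feature names are invented placeholders for illustration.

def predict_cost(record):
    """Predict next-year healthcare cost (in dollars) from past claims data."""
    base = 1500.0
    return (base
            + 900.0 * record["num_diagnoses"]
            + 400.0 * record["num_prescriptions"]
            + 2500.0 * record["num_procedures"])

def risk_score(record):
    # Patients are ranked by predicted cost; those above a percentile
    # cut-off are flagged for referral into extra-care programmes.
    return predict_cost(record)

patient = {"num_diagnoses": 4, "num_prescriptions": 6, "num_procedures": 1}
print(risk_score(patient))  # 10000.0
```

The key design choice, which turns out to matter below, is that the number the model predicts is future *cost*, and cost is then treated as a stand-in for future *health need*.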

Hospitals, healthcare systems and some health insurance providers use this score to identify people who are likely to need more care in the future. Those with the highest predicted costs can be referred for extra medical care to help prevent them getting sicker, says Ziad Obermeyer at the University of California, Berkeley.

Obermeyer and his colleagues wanted to find out more about how the algorithm worked. “Along the way, we noticed that there was this fairly stark difference in the risk scores that black and white patients had at the same level of health,” he says.

Ahead in line

The team found that black people assigned the same score as white people went on to have worse health outcomes. “You can think of it as healthier white patients being put ahead of sicker black patients in line when it comes to allocating enrolment into this programme,” says Obermeyer.

When the team ran a simulation that eliminated the bias, the proportion of people referred for extra treatment who were black increased from 17.7 per cent to 46.5 per cent. {snip}

This doesn’t mean that the algorithm isn’t working, says Obermeyer. “The data are just a reflection of the society that produced the data,” he says. “Black patients will generate lower costs than white patients, and that’s because of a variety of socioeconomic factors related to access to healthcare, as well as direct effects of race on the doctor-patient relationship.”
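The mechanism Obermeyer describes can be shown with a toy comparison (the patients and numbers below are invented, purely to illustrate the proxy problem): if one group faces barriers to care, a sicker patient in that group can generate lower costs than a healthier patient with good access, so ranking by predicted cost puts the healthier patient first, while ranking by a direct measure of health does not.

```python
# Toy illustration of cost as a biased proxy for health need.
# All names and figures are invented for this example.

patients = [
    # (name, chronic_conditions, annual_cost_in_dollars)
    ("A", 3, 4000.0),  # good access to care: need translates into spending
    ("B", 5, 2500.0),  # worse access: sicker, yet generates less cost
]

# Ranking by cost (what the algorithm in the study effectively did)
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# Ranking by a direct health measure (one of the fixes the study simulated)
by_health = sorted(patients, key=lambda p: p[1], reverse=True)

print([p[0] for p in by_cost])    # ['A', 'B'] — healthier patient first
print([p[0] for p in by_health])  # ['B', 'A'] — sicker patient first
```

Nothing in the cost ranking refers to race, which is why the bias went unnoticed: it enters entirely through the gap between what a group spends and how sick that group actually is.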


Significant and prevalent

Bias in algorithms is likely to be “significant and prevalent”, says James Zou at Stanford University in California. “It’s hard to put a number on it, but this could affect a large fraction of the [US] population,” he says.


{snip} Many other companies and academics have developed similar algorithms, and none of them have realised they may be biased, says Obermeyer.

The hospitals, health systems and government agencies using the algorithms haven’t noticed either. “It’s tempting to think they should have known better, but nobody knew better,” says Obermeyer. Regulations that require manufacturers to audit their algorithms might help in future, but users must also take responsibility for making sure the algorithms they use aren’t biased, he says.