Posted on September 7, 2016

Math Is Racist: How Data Is Driving Inequality

Aimee Rawlins, CNN Money, September 6, 2016

In a new book, “Weapons of Math Destruction,” Cathy O’Neil details all the ways that math is essentially being used for evil (my word, not hers).

From targeted advertising and insurance to education and policing, O’Neil looks at how algorithms and big data are targeting the poor, reinforcing racism and amplifying inequality.

These “WMDs,” as she calls them, have three key features: They are opaque, scalable and unfair.

Denied a job because of a personality test? Too bad–the algorithm said you wouldn’t be a good fit. Charged a higher rate for a loan? Well, people in your zip code tend to be riskier borrowers. Received a harsher prison sentence? Here’s the thing: Your friends and family have criminal records too, so you’re likely to be a repeat offender. (Spoiler: The people on the receiving end of these messages don’t actually get an explanation.)

The models O’Neil writes about all use proxies for what they’re actually trying to measure. The police analyze zip codes to deploy officers; employers use credit scores to gauge responsibility; payday lenders assess grammar to determine creditworthiness. But zip codes are also a stand-in for race, credit scores for wealth, and poor grammar for immigrants.
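To make the proxy problem concrete, here is a minimal, invented sketch (nothing from the book; every name and number is hypothetical) of how a lending model that is never shown race can still sort applicants by it through zip code:

```python
import random

random.seed(0)

# Toy population: the model never sees group membership, but residential
# segregation means the zip code carries it anyway. All numbers invented.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_code = random.randint(0, 4) if random.random() < 0.9 else random.randint(5, 9)
    else:
        zip_code = random.randint(5, 9) if random.random() < 0.9 else random.randint(0, 4)
    people.append((group, zip_code))

def loan_risk(zip_code: int) -> float:
    # The lender's model scores only the zip code, say from past default data.
    return 0.8 if zip_code >= 5 else 0.2

denied = {"A": 0, "B": 0}
total = {"A": 0, "B": 0}
for group, zip_code in people:
    total[group] += 1
    if loan_risk(zip_code) > 0.5:
        denied[group] += 1

for g in ("A", "B"):
    print(f"group {g}: denied {denied[g] / total[g]:.0%} of applications")
```

Even though `group` never enters the scoring function, the denial rates come out roughly 10% versus 90%, because the zip code smuggles the information in.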

{snip}

She started blogging about her frustrations at mathbabe.org, and the blog eventually grew into “Weapons of Math Destruction.”

One of the book’s most compelling sections is on “recidivism models.” For years, criminal sentencing was inconsistent and biased against minorities. So some states started using recidivism models to guide sentencing. These take into account things like prior convictions, where you live, drug and alcohol use, previous police encounters, and criminal records of friends and family.

These scores are then used to determine sentencing.
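The book doesn’t publish any vendor’s formula, but schematically a model like this reduces to a weighted checklist over the inputs listed above. A hypothetical sketch, with every weight and threshold invented for illustration (real vendors keep theirs proprietary):

```python
# Hypothetical recidivism-style score: a weighted checklist over the kinds
# of inputs the article lists. All weights are invented.
WEIGHTS = {
    "prior_convictions": 3.0,               # per conviction
    "lives_in_high_crime_zip": 2.0,         # where you live
    "substance_use": 1.5,                   # drug and alcohol use
    "prior_police_encounters": 1.0,         # per encounter, charges or not
    "family_or_friends_with_records": 2.0,  # other people's criminal records
}

def recidivism_score(defendant: dict) -> float:
    """Weighted sum of risk factors; higher means 'riskier'."""
    return sum(weight * defendant.get(factor, 0)
               for factor, weight in WEIGHTS.items())

defendant = {
    "prior_convictions": 1,
    "lives_in_high_crime_zip": 1,
    "prior_police_encounters": 3,
    "family_or_friends_with_records": 1,
}

score = recidivism_score(defendant)
print(score)  # 10.0
print("high risk" if score >= 8.0 else "low risk")  # drives a longer sentence
```

Notice that only the first line of the checklist concerns anything the defendant was actually convicted of; the rest score his circumstances and associations. That is precisely O’Neil’s objection.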

“This is unjust,” O’Neil writes. “Indeed, if a prosecutor attempted to tar a defendant by mentioning his brother’s criminal record or the high crime rate in his neighborhood, a decent defense attorney would roar, ‘Objection, Your Honor!’”

But in this case, the person is unlikely to know the mix of factors that influenced his or her sentencing–and has absolutely no recourse to contest them.

Or consider the fact that nearly half of U.S. employers ask potential hires for their credit report, equating a good credit score with responsibility or trustworthiness.

This “creates a dangerous poverty cycle,” O’Neil writes. “If you can’t get a job because of your credit record, that record will likely get worse, making it even harder to work.”

This cycle falls along racial lines, she argues, given the wealth gap between black and white households. This means African Americans have less of a cushion to fall back on and are more likely to see their credit slip.
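To see why this is a cycle rather than a one-off penalty, here is a minimal simulation (all numbers invented, not O’Neil’s) of a credit score and a job market feeding back on each other:

```python
import random

random.seed(1)

# Toy feedback loop: the chance of being hired each month rises with the
# credit score, and staying unemployed erodes the score. Every number here
# is invented; the point is the dynamic, not the calibration.
def simulate(score: float, months: int = 24) -> float:
    employed = False
    for _ in range(months):
        if not employed:
            hire_prob = max(0.0, (score - 500) / 400)  # 0% at 500, 50% at 700
            employed = random.random() < hire_prob
        if employed:
            score = min(850, score + 5)    # steady income, bills paid on time
        else:
            score = max(300, score - 10)   # missed payments drag it lower
    return score

for start in (560, 700):
    print(f"starting score {start} -> after two years: {simulate(start):.0f}")
```

Two applicants separated by a modest initial gap tend to end up at opposite ends of the range: one spirals downward, the other compounds upward.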

{snip}

And yet O’Neil is hopeful, because people are starting to pay attention. There’s a growing community of lawyers, sociologists and statisticians committed to finding places where data is used for harm and figuring out how to fix it.

She’s optimistic that laws like HIPAA and the Americans with Disabilities Act will be modernized to cover and protect more of your personal data, that regulators like the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC) will increase their monitoring, and that there will be standardized transparency requirements.

And then there’s the fact that these models have enormous potential for good.

Imagine if you used recidivism models to provide at-risk inmates with counseling and job training while in prison. Or if police doubled down on foot patrols in high-crime zip codes, working to build relationships with the community instead of arresting people for minor offenses.

You might notice there’s a human element to these solutions. Because really that’s the key. Algorithms can inform and illuminate and supplement our decisions and policies. But to get not-evil results, humans and data really have to work together.

{snip}