Posted on September 4, 2019

Making Equitable Access to Credit a Reality in the Age of Algorithms

Miriam Vogel, The Hill, August 30, 2019

Last week we saw yet another reminder of the ways algorithms will perpetuate historical bias if left unchecked and unrestrained. The Department of Housing and Urban Development’s (HUD) proposed rule released Monday announced the intent to reduce key protections afforded to consumers under the Fair Housing Act. {snip}

{snip}

Even before this rule change came to light, the ability of algorithms to limit opportunity had become increasingly clear on the national stage. Presidential hopefuls have unveiled plans to address wealth disparities between white and non-white Americans, as well as between men and women, with particular interest in boosting the disbursement of loans to underserved populations. The truth is that access to loans has long been a critical way for Americans to build wealth, yet too often patterns of loan allocation have reinforced, rather than rectified, patterns of inequality by neglecting underserved populations. Some candidates propose to respond to this challenge with new regulations on non-bank institutions that issue loans and with credit scores that incorporate rent and phone bill payments. While comprehensive, these plans would be greatly enhanced if they took into account the role of artificial intelligence in financial services.

{snip} But it’s now clear that AI can also further income and opportunity disparities. To avoid this fate, both industry and government need to take on two central challenges: (1) managing algorithms’ use of variables that are de facto proxies for protected characteristics; and (2) explaining the “black box” of algorithms to prospective borrowers.

First, financial institutions must be held accountable to ensure that their algorithms do not use factors that correlate closely with race or gender and thus become de facto proxies for those characteristics. {snip} In concept, decisions based on race or sex are clearly and unequivocally impermissible. The Fair Housing Act and Equal Credit Opportunity Act prohibit lenders from considering race or gender directly in loan decisions. Yet there are several other factors the financial sector may use that, while not explicitly equivalent to race or gender, correlate with those characteristics. Institutions may decide, for example, that unbanked individuals are less creditworthy. {snip}

Financial institutions must be encouraged, if not required, to interrogate every factor an algorithm uses to make credit decisions and to ensure that none is a proxy for a protected characteristic. They should not be shielded from liability by a third party if we intend to root out unjust, biased determinations of who should benefit from opportunities such as home ownership.
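To make that kind of interrogation concrete, here is a minimal proxy-screening sketch: it checks how strongly each candidate feature correlates with a protected attribute held out for auditing, and flags anything above a threshold for human review. The column names, the sample data, and the 0.4 cutoff are illustrative assumptions, not the method any institution or regulator actually uses.

```python
# Proxy-screening sketch: flag features that correlate strongly with a
# protected attribute. All names and thresholds here are hypothetical.
import pandas as pd


def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.4) -> dict:
    """Return features whose correlation with the protected attribute exceeds the threshold."""
    flagged = {}
    target = df[protected]
    target = pd.Series(pd.factorize(target)[0]) if target.dtype == object else target
    for col in df.columns:
        if col == protected:
            continue
        # Encode categorical columns numerically so a correlation can be computed.
        feature = df[col]
        feature = pd.Series(pd.factorize(feature)[0]) if feature.dtype == object else feature
        corr = feature.corr(target)
        if abs(corr) >= threshold:
            flagged[col] = round(corr, 3)
    return flagged


# Example: an "unbanked" flag that tracks the audited attribute closely would be
# surfaced for review before a model is allowed to rely on it.
applicants = pd.DataFrame({
    "unbanked": [1, 1, 0, 0, 1, 0],
    "zip_code_group": ["A", "A", "B", "B", "A", "B"],
    "protected_attr": [1, 1, 0, 0, 1, 0],
})
print(flag_proxy_features(applicants, "protected_attr"))
```

A correlation screen like this is only a first pass; a flagged feature still needs human judgment about whether it reflects a legitimate business need or merely stands in for a protected class.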

{snip} Even if algorithms used by an institution are devoid of explicit bias, institutions must be able to accurately describe how decisions were reached to ensure they aren’t rooted in undetected, illegal bias. Institutions should therefore be required to ensure their denial-of-benefit notices meet standards for explainability. Specifically, they should be required to list the individual factors used to make a decision and, if applicable, the one or more factors that were determinative. Both Bank of America and Capital One have committed to improving explainability. More banks should be encouraged to follow their lead, whether by regulation, legislation or public pressure.
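For illustration only, the sketch below shows what such a notice could look like for a simple scorer whose per-feature contributions can be ranked: it lists every factor the model considered and singles out the most negative contributions as determinative. The feature names, weights, and cutoff are hypothetical and stand in for whatever model an institution actually uses.

```python
# Sketch of an explainable denial notice built from a toy linear scorer.
# Weights, features, and the approval cutoff are illustrative assumptions.
FEATURE_WEIGHTS = {
    "debt_to_income_ratio": -2.1,
    "years_of_credit_history": 0.6,
    "recent_delinquencies": -1.4,
    "income": 0.9,
}
APPROVAL_CUTOFF = 0.0  # scores below this lead to denial in this toy example


def denial_notice(applicant: dict) -> str:
    # Per-feature contribution = weight x applicant value.
    contributions = {name: FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS}
    score = sum(contributions.values())
    if score >= APPROVAL_CUTOFF:
        return "Approved."
    # List every factor used, then name the most negative contributions
    # as the determinative ones, mirroring the notice standard described above.
    ranked = sorted(contributions.items(), key=lambda item: item[1])
    determinative = [name for name, value in ranked if value < 0][:2]
    lines = ["Factors considered: " + ", ".join(FEATURE_WEIGHTS)]
    lines.append("Determinative factors: " + ", ".join(determinative))
    return "\n".join(lines)


print(denial_notice({
    "debt_to_income_ratio": 0.8,
    "years_of_credit_history": 2,
    "recent_delinquencies": 1,
    "income": 0.5,  # normalized
}))
```

Real credit models are far more complex than a hand-weighted linear score, but the point stands: if contributions can be attributed to individual factors, a denial notice can name them rather than hide behind the model.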

{snip}

Congress should step in and take three steps to bolster the legal avenues available to prospective borrowers. First, it should consider new legislation to clarify the scope of what can be considered a “legitimate business need.” Second, to safeguard against weakening support from the courts and the executive branch, it should enshrine into law that prospective borrowers can invoke the doctrines of both disparate treatment and disparate impact against banks that use algorithms, without giving those banks a false shield to hide behind a third-party vendor or other institution. Third, Congress should create a standard for explainability to cover denial-of-credit decisions. The opacity of AI must not render such denial notices as incomprehensible as GDPR cookie notices and acceptance pop-ups, or worse, end up as a shield for bias against historically targeted, protected classes.

As forums of discrimination shift from the physical to the digital, both governments and businesses must adapt. {snip}