Posted on September 26, 2018

Researchers Raise Alarm over Use of Artificial Intelligence in Immigration and Refugee Decision-Making

Nicholas Keung, The Star (Toronto), September 26, 2018

Wanted: A contractor to help immigration officials use algorithms and data-mining to assess the personal risks of sending a failed refugee claimant home and to calculate if a migrant is well-established enough to stay in Canada on humanitarian grounds.

This is not a fictional ad but a tender notice recently issued by Ottawa to explore the potential use of artificial intelligence and data analytics in Canada’s immigration and refugee system.

According to a new University of Toronto study, the job ad is just the latest example of government replacing human decision-making with machines — a trend it says is creating “a laboratory for high-risk experiments” that threaten migrants’ human rights and should alarm all Canadians.

“The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues,” warned the 88-page report being released Wednesday by U of T’s International Human Rights Program and Citizen Lab.

“These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Concerned about the human impact of automated systems, the researchers dug into public records, including government statements, policies and media reports, to document the federal government’s adoption of these technologies in the immigration system. More than 30 experts were consulted, including computer scientists, technologists, lawyers, advocates and academics from Canada, the U.S., Hong Kong, South Korea, Australia and Brazil.

The researchers also submitted 27 separate access-to-information requests to eight government departments and agencies, but have yet to receive any data or response.

Study co-author Petra Molnar said Canada has used automated decision-making tools since at least 2014 to “triage” immigration and visa applications into simple cases and complex ones that require further review by officers due to red flags the machines were trained to look for.

“Algorithms are by no means neutral or objective. It’s a set of instructions and recipes based on previous data analyses that you use to teach the machine to make a decision. It doesn’t think or understand the decision it makes. It’s a rubber-stamping process,” explained Molnar, a research associate with the International Human Rights Program.

“Biases of the individuals designing an automated system or selecting the data that trains it can result in discriminatory outcomes that are difficult to challenge because they are opaque.”
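To see why a triage system trained on past decisions can carry bias forward, consider the following minimal sketch. It is purely hypothetical: the feature names, data and model are invented for illustration and do not reflect the government’s actual system, whose design has not been disclosed. The point is that the classifier simply reproduces patterns found in whatever historical decisions it is trained on, including any bias those decisions contain.

    # Hypothetical illustration only: a toy "triage" classifier trained on past
    # officer decisions. Feature names and data are invented; the real system's
    # design has not been made public.
    from sklearn.linear_model import LogisticRegression

    # Each row is a past application: [country_risk_score, prior_refusals, has_sponsor]
    past_applications = [
        [0.2, 0, 1],
        [0.9, 2, 0],
        [0.1, 0, 1],
        [0.8, 1, 0],
    ]
    # Labels are the outcomes officers historically reached:
    # 0 = "simple", 1 = "complex / flagged for further review".
    # If those past decisions were biased, the model learns and repeats that bias.
    past_decisions = [0, 1, 0, 1]

    model = LogisticRegression()
    model.fit(past_applications, past_decisions)

    # A new application is routed automatically, with no understanding of its context.
    new_application = [[0.85, 1, 0]]
    print(model.predict(new_application))  # e.g. [1] -> sent for extra scrutiny

The model never weighs the circumstances behind an application; it only matches the statistical shape of earlier decisions, which is what Molnar describes as a rubber-stamping process.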

The study, for instance, looked at an algorithm used in some U.S. courts to assess the risk of reoffending when ordering pretrial detention, and found that racialized and vulnerable people were more likely to be held behind bars than white offenders.
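Findings like that rest on a straightforward kind of audit: comparing the tool’s error rates across demographic groups. The sketch below uses entirely invented numbers, not the actual court data, to show how such a disparity is measured.

    # Hypothetical audit sketch with invented numbers, not the actual U.S. court data.
    # For each group, count people flagged "high risk" who did NOT reoffend
    # (false positives) and compare the rates across groups.
    records = [
        # (group, flagged_high_risk, reoffended)
        ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
        ("B", False, False), ("B", True,  False), ("B", False, False), ("B", True, True),
    ]

    for group in ("A", "B"):
        non_reoffenders = [r for r in records if r[0] == group and not r[2]]
        false_positives = [r for r in non_reoffenders if r[1]]
        rate = len(false_positives) / len(non_reoffenders)
        print(group, f"false positive rate: {rate:.0%}")
    # If group A's rate is much higher, people in that group are being detained
    # on the strength of predictions that more often turn out to be wrong.

When one group’s false positive rate is consistently higher, members of that group are detained on the strength of predictions that more often turn out to be wrong, which is the pattern the study points to.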

Cynthia Khoo of the Citizen Lab said the increasing use of artificial intelligence or algorithmic decision-making tools speaks to the broader trend of “technosolutionism” that assumes technology is a panacea to human frailties and errors.

“The problem, however, is that technology — which is made and designed by humans and trained on human decisions and human-produced data — comes with these exact same problems, but wrapped up in a more scientific-looking box, and with the additional problem that not everyone realizes those problems remain,” said Khoo.

The report recommends Ottawa establish an independent, arm’s-length oversight body to review all uses of automated decision systems by the federal government. It also wants Ottawa to publish all its current and future uses of artificial intelligence and create a task force to better understand the current and prospective impacts of these technologies.