Computers that say No

Two reports in UK papers over the last week – on the defects of facial recognition technology in the Guardian and on automated decision-making in immigration cases in the Financial Times – keep alive the issues around algorithmic justice for those working in legal services. They need not only to know about the impact of technology on their own work (the subject of most of these posts) but also to monitor (and be ready to counter) how technology may unfairly affect decisions about their clients (the subject of this particular post).

The problems of algorithmic decision-making are now pretty well acknowledged. The UK Government-established Centre for Data Ethics and Innovation produced an interim review, last updated in July 2019. It reported: ‘The use of algorithms has the potential to improve the quality of decision-making by increasing the speed and accuracy with which decisions are made. If designed well, they can reduce human bias in decision-making processes. However, as the volume and variety of data used to inform decisions increases, and the algorithms used to interpret the data become more complex, concerns are growing that without proper oversight, algorithms risk entrenching and potentially worsening bias.’ It also published a ‘Landscape Summary’ of the problems of bias by three academics.

There is a growing torrent of writing around the world on the topic of potential bias in government decision-making. As examples, this website has previously reviewed reports from the University of Montreal, a speech by the UK Supreme Court Justice Lord Sales, the views of the UN Rapporteur on Extreme Poverty (pithily articulated as ‘technology no substitute for justice’) and a recent guide to practical implementation.

There is increasing legal engagement with the problems in criminal justice – most notoriously in Loomis v Wisconsin, where the US Supreme Court declined to involve itself in the use of secret algorithms to influence decisions on bail and parole. Another area of concern is employment discrimination, where hiring decisions are being rapidly automated. As an article by Dr Ifeoma Ajunwa explained in the New York Times: ‘The problem is that automated hiring can create a closed-loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After the résumés have undergone automated culling, a lucky few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability built in to check that the criteria are fair to all job applicants.’

Facebook obliged with a textbook example of how all this can work in practice: ‘a 2016 class-action lawsuit alleged that Facebook Business tools “enable and encourage discrimination by excluding African-Americans, Latinos and Asian-Americans but not white Americans from receiving advertisements for relevant opportunities.” Facebook’s former Lookalike Audiences feature allowed employers to choose only Facebook users demographically identical to their existing workers to see job advertisements, thus replicating racial or gender disparities at their companies. In March [2019], Facebook agreed to make changes to its ad platform to settle the lawsuit.’
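To make the closed loop concrete, here is a deliberately simplified sketch in Python (my own illustration: the function names, the two demographic groups and the numbers are all hypothetical, and no real platform’s code is implied). It models a Lookalike-Audiences-style pipeline: adverts are shown only to candidates who resemble the existing workforce, the top-scoring applicants are hired, and the hires are fed back into the next round of targeting. Even though the two groups are equally large and equally skilled, the initial skew in the workforce is replicated round after round, because excluded candidates never see the advertisement at all.

```python
import random

# A deliberately simplified, hypothetical model of the "closed loop" described
# above: adverts are targeted at lookalikes of the existing workforce, hires
# are drawn only from those who saw the advert, and the new hires feed the
# next round of targeting. Nothing here is any real platform's code.

random.seed(1)

# Candidate pool: equal numbers from two demographic groups, identical skill mix.
pool = [{"group": g, "skill": random.random()} for g in "AB" for _ in range(5_000)]

# The employer's existing workforce happens to be mostly group A.
workforce = [{"group": "A"}] * 8 + [{"group": "B"}] * 2

def lookalike_audience(workforce, pool):
    """Stage 1: show the advert only to candidates who 'look like' current
    staff, in proportion to each group's share of the workforce."""
    shares = {g: sum(w["group"] == g for w in workforce) / len(workforce) for g in "AB"}
    return [c for c in pool if random.random() < shares[c["group"]]]

def automated_culling(applicants, n=10):
    """Stage 2: keep the n top-scoring applicants; the score itself is group-blind."""
    return sorted(applicants, key=lambda c: c["skill"], reverse=True)[:n]

for round_no in range(1, 6):
    hires = automated_culling(lookalike_audience(workforce, pool))
    workforce = workforce + hires        # Stage 3: results looped back in
    share_b = sum(w["group"] == "B" for w in workforce) / len(workforce)
    print(f"round {round_no}: group B share of workforce = {share_b:.0%}")
```

On this sketch, group B hovers around a fifth of the workforce indefinitely, even though it supplies half of the equally qualified candidates: the loop does not need a biased scoring model to entrench a disparity, only a biased view of who gets to apply.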

The Guardian story provided yet another illustration of how facial recognition can be insensitive to racial and cultural factors in ‘reading emotions’. “AI is largely being trained on the assumption that everyone expresses emotion in the same way,” [Professor Barrett] said. “There’s very powerful technology being used to answer very simplistic questions.” For a playful exposition of the problems of the ‘coded gaze’, watch the TED talk by Joy Buolamwini (a woman of colour who dubs herself a ‘poet of code’), who found she needed a white mask to be recognised by a computer camera. She has created an organisation, the Algorithmic Justice League, to pursue her argument.

The Financial Times story is hidden behind a paywall. There is a nefarious way of negotiating this. But, actually, you don’t need to. The essence is that the UK immigration scheme for EU settled status is biased against women. That is because its algorithms search for proof of residence through a partial list of tax and social security records which does not include the benefits most likely to be received by women, such as working or child tax credits and other child benefits. There is not much in the story that is actually new; the point is contained in a report by Joe Tomlinson published by the Public Law Project last year, which was written up here last September. In this case, the problems are compounded by the Government’s failure to provide an adequate appeal system and its reliance on internal review instead.
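The mechanism is easy to picture. Here is a minimal sketch (again my own illustration: the record names and the pass/fail logic are assumptions, not the actual design of the settled-status system) of how a residence check wired up to only a partial list of HMRC and DWP records returns a false ‘no’ for someone whose paper trail runs through child-related benefits:

```python
# A minimal, hypothetical sketch of a residence check built on a partial list
# of records. The source names and the pass/fail logic are my own illustration,
# not the actual design of the settled-status system.

# Records the automated check is wired up to search ...
CHECKED_SOURCES = {"paye_income", "self_assessment", "universal_credit", "state_pension"}

# ... and records it ignores, even though they also evidence residence and are
# disproportionately held by women.
IGNORED_SOURCES = {"working_tax_credit", "child_tax_credit", "child_benefit"}

def automated_residence_check(claimant_records):
    """Return True only if a *checked* source shows activity in some year;
    evidence held in an ignored source never reaches the decision at all."""
    return any(
        years
        for source, years in claimant_records.items()
        if source in CHECKED_SOURCES
    )

# A claimant whose entire paper trail runs through child-related benefits.
claimant = {"child_benefit": [2015, 2016, 2017, 2018], "child_tax_credit": [2016, 2017]}

print(automated_residence_check(claimant))  # False: genuinely resident, but the computer says no
```

On this sketch the flaw is not the matching logic but the incompleteness of the sources it is allowed to consult, which is why an internal review run against the same data is no substitute for a proper appeal.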

These stories are going to keep coming as the temptation grows to cut corners and costs in applying artificial intelligence to decision-making. We have to hope that the forthcoming full report of the Centre for Data Ethics and Innovation will set some high standards that government and businesses will keep. But don’t hold your breath and, in the meantime, be ready to argue against computers that say no with apparent authority but little reason – particularly where the decisions concern people who, unlike me, are not white men.
