Algorithmic Justice and the UN rapporteur on extreme poverty: technology no substitute for justice

Philip Alston is an Aussie academic with a proven capacity to get up the noses of rightwing administrations in the United States and the United Kingdom. He does this currently as the UN’s special rapporteur on extreme poverty and human rights. President Trump, he said, was turning the US into the ‘world champion of extreme inequality’. The UK, he reported, ‘is in a state of denial about the impact of austerity policies on the poor’. Professor Alston has now published a report on the ‘digital welfare state’ which is destined to be a touchstone for all those concerned with the digitalisation of welfare benefit adjudication. If you are interested in this subject, you must read this. It is quite short – only 19 pages and 79 paragraphs.

Professor Alston is a heavyweight academic, currently co-chair of New York University’s Center for Human Rights and Global Justice. He is the editor of a 900-page book on The EU and Human Rights. Several of his other publications have been similarly weighty – both literally and figuratively. This report is in a different style and tends more to the polemic, as its author acknowledges: ‘It will reasonably be objected that this report is unbalanced, or one-sided, because the dominant focus is on the risks rather than on the many advantages potentially flowing from the digital welfare state. The justification is simple. There are a great many cheerleaders extolling the benefits, but all too few counselling sober reflection on the downsides.’ 

As a consequence, the report is at its strongest on its main arguments. We know from his published oeuvre that Professor Alston can do detailed, referenced analysis. But here we get the main points and unreferenced quotes and observations. You just have to accept that this is a consequence of a deliberately chosen form. If you want more detail, there is little doubt that Professor Alston can supply it elsewhere.

And Professor Alston summarises his main point: ‘systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish. This report acknowledges the irresistible attractions for governments to move in this direction, but warns that there is a grave risk of stumbling zombie-like into a digital welfare dystopia. It argues that Big Tech operates in an almost human rights free-zone, and that this is especially problematic when the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state. The report recommends that instead of obsessing about fraud, cost savings, sanctions, and market-driven definitions of efficiency, the starting point should be on how welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged.’

Digitalisation of the welfare state is, he argues, a con. It is presented as ‘altruistic and noble’ but it often runs alongside deep cuts in services and provision with more and more intrusive monitoring of, and sanctions on, recipients of welfare benefits. And, though he does not make this extension, his remarks about digitalisation of state activity apply equally to courts and tribunals as well as government departments. Much of the initiative for change has come from massive tech companies which have little or no understanding of the demands of human rights. The UK’s Universal Credit is very much a paradigm of what he is talking about. But other countries supply their own examples: ‘In 2015 the Ontario Auditor-General reported on 1,132 cases of errors with eligibility determinations and payment amounts under SAMS [Social Assistance Management System] involving about $140 million. The total expenditure on SAMS by late 2015 was $290 million. The new system reportedly led caseworkers to resort to subterfuges to ensure that beneficiaries were fairly treated, made decisions very difficult to understand, and created significant additional work for staff.’

He is concerned that ‘communications that previously took place in person, by phone or by letter are increasingly being replaced by online applications and interactions. Various submissions to the Special Rapporteur cited problems with the Universal Credit system in the United Kingdom, including difficulties linked to a lack of internet access and/or digital skills, and the extent to which online portals can create confusion and obfuscate legal decisions, thereby undermining the right of claimants to understand and appeal decisions affecting their social rights. Similar issues were also raised in relation to other countries including Australia and Greece.’ This is an observation that would apply equally to communication with digitalised courts and tribunals. It reflects the alienation depicted, for example, in Ken Loach’s I, Daniel Blake, where the anonymous ‘decision-maker’ makes apparently unappealable, Kafka-esque decisions.

In making his argument, Professor Alston clearly enjoys the irony that a major ally is none other than Prime Minister Johnson: ‘In addressing the General Assembly on 24 September 2019 the Prime Minister of the United Kingdom warned of the dangers of the digital age, singling out: 

  1. the risk of ‘round-the- clock surveillance’; 
  2. the perils of algorithmic decision-making; 
  3. the difficulty of appealing against computer-generated determinations; and 
  4. the inability to plead extenuating circumstances when the decision-maker is an algorithm. 

He concluded rather ominously by suggesting that “[d]igital authoritarianism is … an emerging reality.”’

Boris Johnson concluded his speech by saying: ‘Above all, we need to agree a common set of global principles to shape the norms and standards that will guide the development of emerging technology.’ The speech was not universally approved. Business Insider called it ‘deeply bizarre’ at points. 

Professor Alston ends his report with a plea to take seriously the ‘dangers for human rights of various manifestations of digital technology and especially artificial intelligence’. He wants such discussion as there is to move beyond the traditional but limited civil and political rights such as privacy, non-discrimination and fair trial to the ‘full array of threats represented by the emergence of the digital welfare state’: ‘if the logic of the market is consistently permitted to prevail it inevitably disregards human rights considerations and imposes “externalities on society, for example when AI systems engage in bias and discrimination … and increasingly reduce human autonomy”’. 

Algorithmic justice carries all the perils of artificial intelligence based on humanly collected data: ‘predictive analytics, algorithms and other forms of AI are highly likely to reproduce and exacerbate biases reflected in existing data and policies. In-built forms of discrimination can fatally undermine the right to social protection for key groups and individuals. There … needs to be a concerted effort to identify and counteract such biases in designing the digital welfare state. This in turn requires transparency, and broad-based inputs into policy-making processes. The public, and especially those directly affected by the welfare system, need to be able to understand and evaluate the policies that are buried deep within the algorithms.’ 

Digitalisation of tribunals, about to be implemented in the UK, will extend the issues of the management of the digital welfare state into the courts structure. To the claimant, the digital process will eventually become seamless as they pass from department to tribunal to court. Domestically, researchers like Joe Tomlinson at the Public Law Project are beginning to grapple with the consequent issues. In the States, writers like Virginia Eubanks, who published Automating Inequality last year, are exploring the same territory. We should be grateful to Professor Alston for giving the subject a boost. As the New York Times said in reviewing Professor Eubanks’ book, ‘Everyone needs to understand that technology is no substitute for justice.’