Automated Decision-Making and its lessons for the advice sector

The role of automated decision-making in the justice system has attracted some attention – much of it centred around algorithms that advise judges on decisions about bail. But automated decision-making is expanding throughout the public realm and issues for advisers are beginning to tumble out. What happens, for example, when an algorithm is wrong, discriminatory or, for some reason, goes rogue? Professor Virginia Eubanks raised this issue in her contribution at the US Legal Services Corporation’s Technical Innovations conference in January. A Berlin-based group, AlgorithmWatch, has just published an EU study entitled Automating Society: Taking Stock of Automated Decision-Making in the EU.

Just in case you think that this is a pretty academic matter, you might note that around 10 per cent of automated decisions on Swedish unemployment benefit have just been found to be wrong. An automated Australian benefit debt collection service, generally known as Robo-debt, has proved highly controversial and Victoria Legal Aid is funding a legal challenge. The problem has been an oversimplification in the algorithm used to establish the facts. Rowan McRae, VLA executive director of civil justice, told Pro Bono Australia: ‘The way robo-debt averages people’s income assumes that they work neat, regular hours throughout a year. In reality, we know people work part time or sporadically throughout the year, because they’re studying, can’t get regular work, have multiple jobs or are unwell. This means the calculation of alleged “overpayments” is often inaccurate.’
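The averaging flaw McRae describes can be illustrated in a few lines of code. This is a minimal sketch with hypothetical figures (the earnings, benefit rate and income cut-off below are invented for illustration, not the actual Centrelink rules): a claimant works intensively for six fortnights, earns nothing for the remaining twenty, and claims benefit only in the fortnights with no earnings.

```python
FORTNIGHTS = 26
INCOME_CUTOFF = 500   # assumed per-fortnight earnings limit for eligibility
BENEFIT = 250         # assumed fortnightly benefit payment

# Six fortnights of intensive work, then nothing for the rest of the year.
actual_income = [2600] * 6 + [0] * 20

# Benefit is claimed only in the fortnights with no earnings.
benefit_paid = [0 if pay > 0 else BENEFIT for pay in actual_income]

# Fortnight-by-fortnight assessment: benefit never coincided with earnings
# above the cut-off, so nothing was actually overpaid.
true_debt = sum(b for pay, b in zip(actual_income, benefit_paid)
                if pay > INCOME_CUTOFF)

# Robo-debt-style averaging: annual income spread evenly across the year
# makes every fortnight look like it exceeded the cut-off, so every benefit
# payment is flagged as an overpayment.
averaged = sum(actual_income) / FORTNIGHTS   # 600 per fortnight
alleged_debt = sum(benefit_paid) if averaged > INCOME_CUTOFF else 0

print(true_debt)     # 0
print(alleged_debt)  # 5000
```

On these assumed numbers, a claimant who was correctly paid throughout the year is told to repay A$5,000 — precisely the kind of inaccurate ‘overpayment’ the VLA challenge targets.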

The Department for Work and Pensions declares that, at least for the present, it is not using automated decision-making. Its Personal Information Charter states: ‘The decisions DWP makes that have a substantial effect on you – for example whether or not you are entitled to a benefit – are made with meaningful input from staff. Review or appeal options are built in to all DWP benefit processes, even where this is not specifically required by data protection laws. DWP is developing new digital services all the time. If any new services involve automated decision-making, we will tell you about this when the decision is made.’ Assuming that even a benighted post-Brexit Britain maintains the EU’s General Data Protection Regulation, its rules should provide some level of protection against totally automated decision-making. And the data-mining antics of the likes of Google and Facebook are giving the issue a high political profile.

Professor Eubanks has written Automating Inequality, on how algorithms can operate consciously or unconsciously to disadvantage the poor, and has recorded various videos to press her case further. AlgorithmWatch has just provided a European review based on studies in 12 countries, including the UK. This opens with a commercial illustration of why an essentially good development (cheaper, faster, more consistent decision-making) may have a dark side: ‘When in 1957 IBM started to develop the Semi-Automated Business Research Environment (SABRE) as a system for booking seats on American Airlines’ fleet of planes, we can safely assume that the key goal of the airline was to make the very cumbersome and error-prone manual reservation process of the times more effective for the company and more convenient for the customers. However, 26 years later, the system was used for very different purposes. In a 1983 hearing of the US Congress, Robert L. Crandall, president of American Airlines, was confronted with allegations of abusing the system—by then utilised by many more airlines—to manipulate the reservation process in order to favour American Airlines’ flights over those of its competitors. His answer: “The preferential display of our flights, and the corresponding increase in our market share, is the competitive raison d’etre for having created the system in the first place”’.

Automated decision-making is really a way of describing one of the key attributes of artificial intelligence. ‘Algorithmically controlled, automated decision-making or decision support systems are procedures in which decisions are initially—partially or completely—delegated to another person or corporate entity, who then in turn use automatically executed decision-making models to perform an action. This delegation—not of the decision itself, but of the execution—to a data-driven, algorithmically controlled system, is what needs our attention. In comparison, Artificial Intelligence is a fuzzily defined term that encompasses a wide range of controversial ideas and therefore is not very useful to address the issues at hand.’ To be fair, the term AI is used to cover not only determination, implementation and advice but also communication with users through visual, verbal or other methods. But it is certainly true that the core concern is about automated decision-making.

One objective of the AlgorithmWatch report is to increase the level of awareness of the issues involved. These have already received some consideration in the UK: the topic was the subject of a report by the House of Commons Science and Technology Committee published in May 2018. However, AlgorithmWatch was not impressed: ‘The report identified many problem areas, but was rarely specific in advocating solutions. Instead, it mostly called on existing or forthcoming regulatory bodies to carry out further research.’

Automated decision-making may not yet be deployed by the Department for Work and Pensions, but it is being used in relation to personalised budgets for social care, which are the responsibility of local councils. ‘It is not known exactly how many people have had their personal budgets decided with the help of ADM. However, one private company, Imosphere, provides systems to help decide personal budgets for many town halls and National Health Service (NHS) regions. Its website says that around forty town halls (local authorities) and fifteen NHS areas (Clinical Commissioning Groups, which also have the power to allocate personal budgets) across England currently use the system, and that it has allocated personal budgets worth a total of over £5.5bn.’ Imosphere promises on its website to ‘Calculate a personal budget, a personal health budget, or an integrated budget across: Adult social care, Children’s services, covering social care, education and health, Continuing healthcare and long-term conditions.’ Inevitably, at a time of austerity, claimants have found their budgets cut and controversial decisions have been made.

So, what is to be done? The first objective of the EU research was to raise awareness of the issue and to chart the extent of automated decision-making, both present and prospective. Interestingly, it warns against too much concern with the more remote implications of AI: ‘The debate around AI should be confined to current or imminent developments.’ In other words, don’t spend too much time worrying about HAL’s futuristic ‘I am sorry, Dave, I can’t do that’.

The second set of objectives relates to strengthening civil society involvement in implementation and creating adequate oversight: ‘It is doubtful … that many of the oversight bodies in place have the expertise to analyse and probe modern automated decision-making systems and their underlying models for risk of bias, undue discrimination, and the like.’ That would be helped by involving ‘a wide range of stakeholders in the development of criteria for good design processes and audits, including civil liberty organisations.’

Third, if we take the specific case of public decisions taken in relation to money, services and status, we can probably go further. A battle has raged since the development of the welfare state after the Second World War between those who saw benefits as legal entitlements on transparent and enforceable terms and those who saw them as discretionary – or even conditional. In the 1980s, under the Conservative Thatcher government in the UK, it looked like the notion of welfare rights had won, and the present structure of appeals and regulations was established. This has been successively attacked – substantively by the replacement of single payments on defined terms by discretionary assistance, and then procedurally by the removal of direct and immediate appeal rights. As benefit cases are transferred to the online tribunals, we need to be careful that the principle of accountable, individual decision-making by independent adjudicators is maintained. Indeed, it should be retro-fitted to the initial decision-making process. Automated systems may advise but not decide.

Fourth, whatever supervisory organisation is put in place, it should have a funding capacity similar to the Equality and Human Rights Commission and be capable of resourcing class litigation of the kind contemplated in Victoria.

Fifth, judgements about degrees of disability are likely to prove one of the most difficult and contentious areas of decision-making. As the pressure mounts to make this an area of decision by numbers, claimants need the best advice on how to put their case. The Department for Work and Pensions should fund developments like those by LexisNexis, Se-Ap and AdviceNow that show claimants how to draft their cases properly and how to match them against the legal requirements. Having developed good, objective systems, the department should then implement them itself.

Finally, the inherent tensions of benefit systems in an age of government austerity need to be explicitly recognised. We need more organisations like Mencap able to make available web-based resources for individual and systemic challenge. And we need to grapple with technology within the advice sector not only as a way of leveraging our own resources but also as something that potentially raises a new set of problems on which people will need advice.
