Nesta (a UK organisation whose function is opaque unless you know that the initials stand for its former name, the National Endowment for Science, Technology and the Arts) has published a valuable guide to help public sector organisations make the most of algorithmic decision-making. The value of this paper is that, unlike most of the other papers signposted in a recent House of Lords Library briefing on the topic, it is concerned with the practicalities of implementation. Papers from the European Parliament and the government’s response to a Home Affairs committee report in 2018 are more high level. The Nesta report focuses on the human-machine interaction critical to proper implementation of AI-based systems: ‘How people are working with tools is significant because, simply put, for predictive analytics tools to be effective, frontline practitioners need to use them well.’
The author, Thea Snow, a senior programme manager for Nesta, invents a term, ‘artificing’, to identify the ‘optimal way for frontline practitioners to use predictive analytics tools’. This combines ‘the tool’s insight with their own professional intuition’. Personally, I do not like the term: it is too close to ‘artificial’, yet it denotes something close to the opposite: the application of common sense and professional judgement. You have, however, to like the concept, and it marries with one of the emerging lessons from the use of technology more generally in the justice field – and perhaps more widely. The best solutions which technology offers often do not come in the form of ‘fire and forget’: they require a working interaction between people and machines.
Let us go back to some definitions. Predictive analytics ‘refers to the application of machine learning algorithms to mine data, create models and analyse existing data to discover patterns and make predictions’. Wherever you live in the world, predictive analytics is coming to you embedded in a range of public functions: healthcare, traffic control, fire forecasting, risk modelling for the care of children, predicting outbreaks of crime and risk modelling in the courts – most famously in the COMPAS tool used in the United States for bail and sentencing decisions.
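To make that definition concrete, here is a minimal, purely illustrative sketch of what such a tool does under the bonnet: a model is fitted to historical case records and then asked to score new cases. The column names and the choice of scikit-learn are my own assumptions for the purposes of illustration; they do not reflect any particular tool discussed in the report.

```python
# Illustrative sketch only: a toy predictive-analytics pipeline.
# Feature names ("prior_referrals", "household_size") are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Historical case records with a known outcome (1 = adverse event occurred).
cases = pd.DataFrame({
    "prior_referrals": [0, 3, 1, 5, 0, 2, 4, 1],
    "household_size":  [2, 5, 3, 6, 1, 4, 5, 2],
    "adverse_outcome": [0, 1, 0, 1, 0, 0, 1, 0],
})

X = cases[["prior_referrals", "household_size"]]
y = cases["adverse_outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 'Create a model' from existing data...
model = LogisticRegression().fit(X_train, y_train)

# ...then 'make predictions': a risk score for each new case, which a frontline
# practitioner would weigh against their own professional judgement.
risk_scores = model.predict_proba(X_test)[:, 1]
print(risk_scores)
```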
Predictive analytics has undoubted benefits in terms of efficiency and for spotting unexpected correlations. Its defects are, however, increasingly clear: it is
- not good at predicting rare events;
- often trained on incomplete data;
- often trained on biased data, resulting in discriminatory tools (a minimal sketch of this effect follows below).
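Since the bias point is the one that causes most public concern, here is a second purely illustrative sketch of how it arises. The scenario, numbers and variable names are my own assumptions, not data from the report: if one group was historically flagged more often for the same underlying need, a model trained on that record will reproduce the disparity.

```python
# Illustrative sketch only: biased historical labels produce a discriminatory tool.
# The scenario and variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, n)     # 0 = group A, 1 = group B
need = rng.normal(0.0, 1.0, n)    # the genuine underlying risk factor, identical across groups

# Historical record: group B was flagged at a much lower threshold of need.
historical_flag = (need > 1.0) | ((group == 1) & (need > 0.0))

# Train on that biased record, with group membership available as a feature.
X = np.column_stack([need, group])
model = LogisticRegression().fit(X, historical_flag)

# Two cases with identical need but different group membership:
same_need_cases = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_need_cases)[:, 1])  # group B receives the higher risk score
```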
All this is now pretty well known and accepted, at least in theory. What is interesting is how practitioners are responding to these insights.
Ms Snow’s interviews with practitioners revealed that:
‘More than one-third of practitioners were ignoring the [prediction] tool. This is known as algorithm aversion. Despite there being a fear in the media and commentary that practitioners will defer to the advice of the tool (known as automation bias), this was very uncommon. A key reason for this appeared to be that deference was being explicitly and very strongly discouraged in tool training. However, a number of practitioners expressed fears that deference will likely become an issue as the tool becomes more embedded in practice and the novelty (and therefore caution) of using the tool wears off.’
‘Many practitioners draw on both professional judgment and unconscious bias to inform the intuitive element of their decision-making. This is significant because it challenges claims that algorithmic tools will bring “scientific order and consistency to… decision-making practice”, and highlights that bias endures as a feature of human decision-making, despite the introduction of predictive analytics tools.’
Ms Snow then proceeds to look at how you could support a ‘productive human-machine interaction’. And, for a lawyer, this breaks down into a comforting range of numbered points. There are three core principles:
‘Context
Introducing the tool with awareness and sensitivity to the broader context in which practitioners are operating increases the chances that the tool will be embraced by practitioners.
Understanding
Building understanding of the tool means practitioners are more likely to incorporate its advice into their decision-making.
Agency
Introducing the tool in a way that respects and preserves practitioners’ agency encourages [what she calls] artificing.’
She divides each of these three principles into a number of checklists. I like the practicality of some of her points – ‘make the tool as frictionless as possible’, avoiding the use of a separate screen; ‘spend time thinking about the programmes and processes that are in place to support those identified by the tool as being at risk – frontline workers need to know they have the resources and tools available to them to effectively respond to whatever risk is flagged’; ‘design the tool in a way that offers practitioners an option to view data sources and data currency’; and, I would have thought importantly, management should show its ownership of new ways of working (good advice anyway): ‘the tool’s value proposition must be communicated by managers and the leadership team, not an external consultant or tool developer’.
The final lesson is that deference to the tool is to be discouraged, but not too much. Ms Snow ends with the salutary tale of Stanislav Petrov, the Soviet Lieutenant-Colonel who ignored a false machine-generated warning of a US missile strike. He was subsequently reprimanded by his superiors for failing to maintain sufficient records, though celebrated after his death by headlines like the BBC’s ‘Stanislav Petrov: The man who may have saved the world’. Never mind the human-AI interface; the human-bureaucracy one can be equally troublesome. And Nesta has produced as good a guide as any on how to deal with it.