Let me begin with an admission which is also an implied tribute. The issues arising from the use of algorithms in government decision-making had passed me by until a Legal Services Corporation conference last January. There, Professor Virginia Eubanks gave a presentation, squeezed among others in a session, on her book ‘Automating Inequality’. Since then, it has been hard to remain ignorant of a flood of activity that includes a paper from our domestic Law Society. There are official initiatives like the ad hoc committee on AI and criminal justice of the Council of Europe, which met for the first time this week. UK Supreme Court justices have weighed in on the issues – with Lord Sales giving a key lecture last week. And we also have what I think is the best treatment that I have read of AI in criminal justice, from a team at the University of Montreal under Dr Benoit Dupont and commissioned by the Korean Bar Association – an interesting international relationship.
I like this report for a number of reasons – though I should stress I am no expert in the field and may be biased. First, it is very clearly written. For example, it begins with the descriptive chapter on AI that you would expect. I like its approach to looking at ‘general’ AI – from which we are some distance and for which it cites ‘the Wozniak Coffee test – can a machine go into an unknown house and make a cup of coffee?’ Answer: no. And not likely anytime soon. Right now, we are concerned more with ‘narrow’ AI limited to particular circumstances. I like its basic description of the move from rule-based AI to machine learning (‘Instead of trying to encode his knowledge into the system, the programmer will show the algorithm a number of examples and a label for the data. The machine will then itself figure out what these examples have in common.’) And then on to deep learning (‘Instead of manually extracting features from the data, the engineer can feed the data directly to the Deep Learning algorithm, which will automatically find the relevant features.’) Thereby hangs a tale for its application in criminal justice: the algorithm, the data and the answer – none of them unproblematic.
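To make that move from rules to examples concrete, here is a minimal sketch in Python – my own illustration with invented toy data, not anything drawn from the report – contrasting a hand-coded rule with a model that infers its rule from labelled examples:

```python
# A minimal sketch (not from the report) contrasting rule-based AI
# with machine learning. Features and labels are invented toy data.
from sklearn.tree import DecisionTreeClassifier

# Rule-based AI: the programmer encodes knowledge as an explicit rule.
def rule_based_flag(num_prior_offences: int, age: int) -> bool:
    return num_prior_offences > 3 and age < 25

# Machine learning: show the algorithm labelled examples instead;
# it works out for itself what the positive cases have in common.
X = [[0, 40], [5, 22], [1, 30], [7, 19], [2, 45], [6, 21]]  # [priors, age]
y = [0, 1, 0, 1, 0, 1]                                      # labels supplied by humans

model = DecisionTreeClassifier().fit(X, y)
print(rule_based_flag(4, 23))      # rule written by hand
print(model.predict([[4, 23]]))    # rule inferred from the labelled data
```

The point the report makes falls straight out of the second half: the humans no longer state the rule, so everything turns on what data was shown and how it was labelled.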
Second, the authors really know their field so they have a command of the practical. They include a chapter on how criminals can use AI so that it becomes a direct element in crime itself. ‘Like many technological developments, AI is characterized by its dual use – it has applications both for socially beneficial and malicious ends. It can be used to make crimes more efficient … The public needs to be made aware of the fact that many of the old assumptions may no longer hold up. For example, videos might be faked, and emails and phone calls asking them for their information could be generated by machines to separate them from their money. Depending on the nature and extent of forthcoming attacks, this could be a painful adjustment period for many people.’
Third, the report is good on analysis. For example, it points to an important distinction in technology used in law enforcement – ‘facial recognition, as a method of identification, is not like DNA, which sits proudly on a robust statistical platform.’ The science of facial recognition is much less secure and can more easily be fooled. And there is an inherent problem for developments like AI-based crime prediction – ‘The use of AI by law enforcement may engender confirmation bias of police officers looking for crime, which may in fact alter crime rates … police will detect more crime when they are in a certain place than they would have done otherwise. In other words, if an equal amount of crime is happening in two places, the police will detect more crime in the place they were, rather than in the other place, where they were not.’
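That feedback loop is easy to demonstrate. Here is a toy simulation of my own (not from the report): two places with identical true crime rates, with the patrol always sent to wherever more crime has been detected so far.

```python
# Toy simulation of the predictive-policing feedback loop.
# Both places have the SAME underlying crime rate; only the
# patrolled place has its crime detected and recorded.
import random

random.seed(0)
TRUE_RATE = 0.3            # identical underlying crime rate in both places
detected = [1, 1]          # start with equal detection counts
for day in range(1000):
    # send the patrol wherever more crime has been detected so far
    patrolled = 0 if detected[0] >= detected[1] else 1
    # crime occurs equally in both places, but is only observed where the patrol is
    if random.random() < TRUE_RATE:
        detected[patrolled] += 1

print(detected)  # detections pile up in one place despite identical true rates
```

Run it and the detections pile up almost entirely in one place – and the recorded figures then ‘confirm’ that the patrolled place is the high-crime one.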
Fourth, the report is good on empirical detail and historical context. The move to AI prediction of potential bail breaches in the US is placed within its particular background. The infamous COMPAS system predicting re-offending (which the US Supreme Court refused to review) is explained as an unfortunate combination of the widespread privatisation of criminal justice in the US and a drive, inherently progressive, to move bail decisions away from the evident bias that flows from whether detainees can afford to put up a bond.
Fifth, the writers really know the detail. So, for example, there is the most comprehensive list of AI software used in criminal justice that I have seen. There is detail on various ‘public safety assessments’. We get a screenshot of one of the questions which contaminates the objectivity of COMPAS – ‘Based on the screener’s observations, is this person a suspected or admitted gang member?’ Welcome, every prejudice screeners may have about race and youth. The report ends with a comparison of five attempts to provide an ethical framework for criminal justice AI: from Japan, the UK (House of Lords), international bodies, France and Canada’s own Montreal Declaration for a Responsible Development of AI.
And let’s end with the report’s writers in full academic mode: ‘A study published by scholars Julia Dressel and Hany Farid in January 2018 sought to assess the accuracy of COMPAS, and in so doing demonstrated that the software is actually accurate an average of 65% of the time. This study also demonstrated that the recidivism predictions of COMPAS were no more accurate than predictions made by people with little or no criminal justice expertise or simple statistical analysis based on two features.’ Oh dear.
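For a sense of scale: the two features in Dressel and Farid’s simple model were the defendant’s age and number of prior convictions. Something on the order of this sketch – invented toy data, emphatically not their code or COMPAS’s – is what matched the commercial product:

```python
# A sketch of a two-feature predictor of the kind Dressel and Farid
# compared against COMPAS (their features: age and number of priors).
# The training data below is invented for illustration only.
from sklearn.linear_model import LogisticRegression

X = [[19, 5], [45, 0], [23, 3], [52, 1], [30, 7], [61, 0]]  # [age, priors]
y = [1, 0, 1, 0, 1, 0]   # 1 = re-offended within two years (toy labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[25, 2]]))  # predicted probability of recidivism
```

A dozen lines of freely available code performing on a par with a proprietary system used in real bail and sentencing decisions: that is the force of the study’s finding.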
Image by Dominique Nancy from Pixabay