Automated Decision-Making, Predictive Algorithms and a Coda.

Sod’s Law. You write a piece about automated decision-making and its potential relevance to the advice sector. You use what appear to be the best current examples – from Sweden, Australia and an EU study. From the UK, you report only a denial of any use of predictive algorithms from the Department for Work and Pensions and a thinnish Commons Select Committee report. Then, on the day you publish, Sky News (of all places) runs a story with the headline ‘Thousands face incorrect benefit cuts from automated fraud detector’.

The story, filed by Sky’s technology correspondent Rowland Manthorpe, reports the work of the London Counter Fraud Hub: ‘Using vast quantities of data from millions of households, it is designed to target potential fraud cases involving the single person council tax discount, subletting in local authority housing and business rate relief and rating.’ The process has been trialled in four London boroughs – Camden, Islington, Ealing and Croydon – and found to be ’80 per cent effective’ in identifying fraud, described as ‘an acceptable benchmark’. One estimate puts the potential saving to London’s 33 local authorities at £15m a year.

The London Counter Fraud Hub has a background that indicates the web of connections behind this initiative. At its heart is a product designed for the insurance industry to combat insurance fraud. The system is named Canatics, which stands for Canadian National Insurance Crime Services. In this country, it is run by the Chartered Institute of Public Finance and Accountancy – its website has a handy YouTube video on how it all works in insurance. Partners in its construction include BAE Systems. Information is sent by insurers to Canatics. If Canatics’ algorithms suggest fraud, the insurance company concerned is notified and can take action as appropriate. There could be no problem with that. And, indeed, the use of data analytics to identify fraud seems highly desirable.

And there would, presumably, be little problem if this way of working were followed in relation to benefits. Councils feed in data; Canatics identifies cases for further investigation; councils investigate and take appropriate action. However, there are suggestions in the Sky article that councils may threaten action solely on the basis of a prediction – one known to have a 20 per cent failure rate. You can see the temptation for a hard-pressed council: make the accusation and leave the accused to respond. It would probably be given a name like ‘shaking the tree’. Ealing, however, say that there is nothing to see here: ‘We will not cancel anyone’s council tax discount without giving them a fair say, which is why we are writing to them first.’
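To give a rough sense of what that failure rate could mean in practice, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that ‘80 per cent effective’ means eight out of ten flagged cases turn out to be genuine fraud; the number of flagged households is invented for the example.

```python
# Back-of-the-envelope illustration only. The 80% figure is the benchmark
# quoted in the Sky story; the number of flagged households is hypothetical.
flagged_households = 10_000   # hypothetical volume of cases flagged by the hub
effectiveness = 0.80          # the reported "80 per cent effective" benchmark

likely_genuine = round(flagged_households * effectiveness)
wrongly_flagged = flagged_households - likely_genuine

print(f"Cases flagged:        {flagged_households:,}")
print(f"Likely genuine fraud: {likely_genuine:,}")
print(f"Wrongly flagged:      {wrongly_flagged:,}")
# -> on these assumptions, 2,000 households would be wrongly flagged if
#    accusations were made on the prediction alone, with no human check first.
```

On those assumptions, a tree-shaking exercise at that scale would wrongly accuse around 2,000 households – which is precisely why a human investigation before any letter goes out matters.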

The system has been criticised by data experts. Sky reported: ‘Joanna Redden, co-director of Cardiff University’s Data Justice Lab, said: “When automating a system like this, when you know some people are going to be wrongly identified as committing fraud, and that many will have few means or resources, there are serious concerns that need to be addressed. I would urge the councils who are considering automating this process not to do so, particularly given what we know about how this kind of system can go wrong.”’

Canatics was constructed with some care by an insurance industry whose members would want to be careful with their reputations. Hard-pressed councils are going to be tempted to shoot first and ask questions afterwards. But decisions, effectively of a quasi-judicial nature, should be taken by identifiable and accountable individuals. And, helpful though technology can be, it should never be the computer that ‘says no’. It should be an individual exercising judgement. And advice agencies the length of the land need to be watchful of attempts to cut corners through automated processes.
