Artificial Intelligence and Access to Justice: Hitting the Wall

Two days ago, I attended the Artificial Intelligence in Legal Services summit to promote a Law Society report on algorithm use in the criminal justice system. The publication merits its own, later, post. The day, though not without its problems, raised important issues about artificial intelligence and access to justice.

Indulge me for a minute before we get to the meat of the discussion. I need to open with a structurally irrelevant – but heartfelt – protest against the current vogue for underground conference centres. You get a large space, an oppressive roof and no natural daylight. This is not really acceptable – even if enlivened, as this one was, with a bit of London’s original Roman wall bathed in washed-out imperial purple light running along one side. For good measure, I thought the coffee over-stewed as well – but then it is singularly hard for conference venues to get the right caffeine technology. The environment was particularly important because the total lecture hours of the day came in at just under nine – with no afternoon tea break.

But back on track. What were the main points? There was a running issue about definition. The answer seemed almost to be that you know AI when you see it: if you kick it, your foot hurts. The core that most speakers agreed upon was that its distinctive nature involves autonomous prediction or decision-making that incorporates an element of machine learning – processing by the programme itself. Richard Susskind, speaking by video, emphasised how modern AI differs from the rule-based earlier version with which he grew up. The result, potentially with ethical and use-limiting consequences, is that you cannot necessarily explain in logical terms an AI-derived decision: it arises from the application of the algorithm to the data. There were various suggestions – probably now all doomed – for a better term than ‘artificial’ – for example, ‘extended’.

This was billed as a discussion on AI in the legal services sector. It is clear that AI has a differential impact in different areas. As Christina Blacklaws, Law Society president, said, AI is having a massive impact on the B2B (business to business) sector; some limited impact on B2C (business to consumer); and virtually none on A2J (access to justice). We saw a good demonstration of Luminance, which provides AI-based document review, sorting documents into different categories much more quickly than humans could do it. But this is operating in a field where there is a lot of clean data and sufficient funds to merit investment. There were two or three mentions of access to justice in the day, but this is not really where the interest lies – certainly not for an audience corralled in the City of London.

For part of the day, I sat next to a professor of ethics. That reflected a high-level issue with AI. There is the well-trodden issue of algorithmic bias, explored further in the Law Society paper. There is the need for transparency and for accountability. And there is the danger that governments, at a time of austerity, are reaching for AI solutions before considering the ethical issues. At the margin, this involved the question, raised by a couple of speakers, of whether there are decisions that AI should never be allowed to make. These might include taking life in either a military or health setting or, perhaps more arguably, decisions on guilt and punishment in the criminal justice context. The shadowy boundary between AI’s capacity for prediction and actual decision-making was raised. Richard Susskind reckoned we had a decade to sort out these issues and that, to do so, we needed to focus on the concrete evidence derived from experience. We needed to avoid, as Birmingham University’s Professor Sylvie Delacroix put it, the ‘boiling frog’ effect of values being pre-empted by implementation.

The day was long enough to contain some speakers with a degree of scepticism, particularly in the context of access to justice. Edinburgh’s Professor Burkhard Schafer expressed a general criticism of technological solutions to social problems. Canadian lawyer Duncan Card acknowledged that AI might help with high-value commoditised legal services but was not going to do away with the need for individual judgement in anything complicated. He made – but did not labour – a related point: you may need individuals as the ultimate backup. He referred to airline pilots as ‘redundant systems’ but did not suggest that they be removed.

My main observation on the day is one which has been well made by others. AI is a sufficiently large topic to merit investigation of the enormous effects that it will have on our economies and politics. But there is a danger, in a field like access to justice, of it becoming an answer looking for a question. Governments, like our own, are handing out dollops of money for AI pilots. But you really need to begin by breaking down the elements of access to justice and then interrogating them to see where AI (or other technological trends such as legal design) might be able to improve efficiency, effectiveness and quality.

So, if we take a typology recently publicised in a tweet by Margaret Hagan, reflecting what seems a reasonable division of function as experienced by a seeker of justice, we have a six-stage process that moves from:

  • Knowing you have a legal issue;
  • Understanding your service and process options;
  • Making and deploying a strategy;
  • Participating in the justice process;
  • Negotiating, defending, proving, representing;
  • Making sense of outcomes and enforcing them.

What I like about this division is the emphasis that it puts on the non-mechanistic and the consideration of strategy. To get the full range of access to justice functions, I would combine it with something like the following division of functions:

  • Diagnosing and identifying a problem;
  • Providing sufficient information to decide how to address it;
  • Referring where appropriate;
  • Supporting self-help where appropriate.

Having agreed an overall typology, we could then move on to a discussion of how AI and technology might (or might not) be used at each point. This would be most fertilely done on an international basis and through the concrete examples that are beginning to exist. For example, how could AI help you diagnose a legal problem? There is a lot of work on that in the US, but there is also the example of existing sites like citizensadvice.org.uk, which already give preliminary information but which are not interactive. AI has obvious uses in the field of referral, which are being widely explored.
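To make the contrast concrete: even before anything that deserves the name AI, the interactivity that static information sites lack can be supplied by a simple guided pathway – a decision tree that asks a few questions and routes the seeker to a preliminary next step. Here is a minimal sketch; the questions, categories and outcomes are purely illustrative assumptions of mine, not taken from any real advice service.

```python
# A minimal guided-pathway sketch for preliminary legal triage.
# All questions and outcomes below are illustrative placeholders.

# Each node is either a question with 'yes'/'no' branches,
# or a leaf suggesting a next step.
TRIAGE_TREE = {
    "question": "Is your problem about money you owe or are owed?",
    "yes": {
        "question": "Have you received court papers about the debt?",
        "yes": {"outcome": "Urgent: seek advice on responding to the claim."},
        "no": {"outcome": "Start with debt-management guidance."},
    },
    "no": {
        "question": "Is the problem about your home (rent, eviction, repairs)?",
        "yes": {"outcome": "Housing problem: a housing adviser may help."},
        "no": {"outcome": "General enquiry: use a broader diagnostic service."},
    },
}

def triage(node, answers):
    """Walk the tree with a list of 'yes'/'no' answers; return the outcome."""
    for answer in answers:
        if "outcome" in node:
            break
        node = node[answer]
    return node["outcome"]
```

The point of the sketch is how modest the technology can be: a guided pathway is just a hand-built tree, whereas an AI diagnostic would learn the branching (or replace it with free-text classification) from data – which is exactly where the clean-data problem discussed above bites.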

So, my plea from the day would be for a discussion of access to justice as a subject in itself – for the achievement of which technology might offer a variety of assistance, including sexy AI, exciting legal design, and more mundane interactivity and guided pathways. Anyone up for a web-based discussion?
