It is difficult, here in London, to concentrate on anything that is not the Brexit slow car crash. We need to be reminded that there are other major challenges. The sainted David Attenborough is doing his best to get climate change on the Davos agenda. And, back here in the territory of legal services and access to justice, we need to talk about the potential impact of artificial intelligence.
I know. The problem is that artificial intelligence has been overhyped, oversold and over here for rather too long in relation to access to justice without doing much. Lawyers in large commercial firms are happily cavorting with shoals of tech start-ups in a frenzy of feeding activity. That seems to be giving results. The Times reported today that due diligence checks in mergers and acquisitions had been reduced by more than half through the use of ‘robots’. But, down among what are sometimes disparagingly called the ‘bottom feeders’ – low-price, high-volume practices, both commercial and not for profit – providers are rather more cautious with their dollar/pound/euro. So AI, despite its high media profile, has had relatively little actual impact in access to justice provision. It is easy to find cynical and hard-pressed practitioners dismissing its possibilities as a deceptive mirage.
There are occasional practical forays into the field. A number of the grants given by the Legal Services Corporation’s Technology Initiative Grants programme refer to AI. And one of the best sessions at its recent conference covered the subject. Over here, the Solicitors Regulation Authority has hailed the role of technology, particularly in automating the ‘back offices’ even of small firms. The SRA is now moving on specifically to examine AI. Paul Philip, its chief executive, said: ‘Our report highlights the potential for technology to add further value in the workplace and we are looking further at how AI can enable the provision of high-quality legal services through the government Pioneer Fund award. Many firms are already exploring the possibilities and I would urge all law firms to consider how technology can help you and your business.’
Let us begin with some definition of what we are talking about. AI is famously hard to pin down and is generally seen as a collection of different technologies. These might be best organised by function.
There is an opening cluster of technologies around communication – including natural language processing, speech and visual recognition – that facilitate the interaction between humans and computers, allowing users to get responses to sometimes complex questions put in ordinary and simple language. These replies may be both verbal and visual. Look at the website of New Zealand’s Soul Machines for a spookily lifelike interactive chatbot which can smile, blink and even lift an inquiring eyebrow.
There is a second group of functions around statistical correlation and prediction. Feed in enough data and you get correlations. These are statistical – ‘on the information you have given me, you have an 83 per cent chance of lung cancer and a 17 per cent chance of a broken toe’. AI can do some pretty humdrum tasks – for example, it can help practices to predict costs, case lengths and outcomes. It should be noted that there are a number of limitations inherent in AI’s statistical methodology. Answers can only relate to the data that has been given to the machine: AI will not predict a black swan. AI will also make less accurate predictions in factual situations which are rarer and on which it has less past data. And, used in any legal situation, AI decisions should be explicable by a human, whether they are made to (or by) a court or tribunal or to a person seeking advice and information. Indeed, this may be an ethical requirement for any AI-derived advice or decision which affects life or liberty, e.g. automated driving.
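The ‘black swan’ point can be made concrete with a toy sketch. The case types, outcomes and numbers below are entirely invented for illustration: a purely frequency-based predictor can only echo the proportions in its past data, and for a category it has never seen it can say nothing at all.

```python
from collections import Counter

# Invented past case records: (case type, outcome) pairs.
past_cases = [
    ("housing", "won"), ("housing", "won"), ("housing", "lost"),
    ("debt", "lost"), ("debt", "lost"),
]

def predict(case_type):
    """Return percentage odds for each outcome seen in past data."""
    outcomes = Counter(o for t, o in past_cases if t == case_type)
    total = sum(outcomes.values())
    if total == 0:
        # A 'black swan': no past data for this category means no prediction.
        return None
    return {o: round(100 * n / total) for o, n in outcomes.items()}

print(predict("housing"))      # {'won': 67, 'lost': 33}
print(predict("immigration"))  # None - the model has never seen this category
```

Real systems are vastly more sophisticated, but the underlying constraint is the same: the rarer the factual situation, the thinner the data behind the percentage.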
Third, AI incorporates machine learning – processes by which the machine can be taught to learn from its own previous decisions. This is what distinguishes it from a guided pathway, which is exactly the same before and after a user follows it. An AI self-driving programme will adapt to its experience of previous situations, changing those pathways.
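The distinction can be sketched in a few lines. The rule and numbers below are invented for illustration: unlike a fixed pathway, this toy rule shifts its own threshold whenever feedback shows it was wrong, so the same question can get a different answer over time.

```python
class AdaptiveRule:
    """Toy 'learning' rule: the threshold moves after each mistake,
    so the rule's behaviour changes with experience."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def classify(self, score):
        # True = 'likely eligible' under the current threshold.
        return score >= self.threshold

    def learn(self, score, actually_eligible):
        # Nudge the threshold only when the prediction was wrong.
        if self.classify(score) != actually_eligible:
            self.threshold += -0.05 if actually_eligible else 0.05

rule = AdaptiveRule()
print(rule.classify(0.48))   # False under the initial threshold of 0.5
rule.learn(0.48, True)       # feedback: this case was in fact eligible
print(rule.classify(0.48))   # True - the rule has adapted
```

A static guided pathway, by contrast, would give the first answer forever.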
Fourth, AI can make decisions and initiate action on the basis of its statistical correlations. That could happen as you are vaporised by an automated drone attack in the mountains of the Hindu Kush. Or, alternatively, it could lead to the classic ‘the computer says “No”’ which pervades much social security and other government decision-making. More positively, it could help users to complete the forms that underlie much interaction with government and commerce.
There is some discussion of the use of AI in one or all of these forms in the delivery of access to justice but little actually to see – at least as yet. Indeed, in a reversal that seems significant, the Australian experiment with Soul Machines’ creation of Nadia – a visual chatbot with the voice of Cate Blanchett designed to answer questions on new disability provision – came to an abrupt stop when the government pulled the plug as costs escalated and it became clear that IBM’s much-hyped Watson programme was not, at that time, powerful enough.
Overall, organisations are still largely seeking to improve ‘linear’ advice and information provision. The Citizens Advice service in England and Wales provides a good paradigm. It is in the process of re-orientating itself towards a digital initial presence to which its nationwide offices are backup. Its website gets better and better. But it is still, in essence, a linear screed which sets out information in the classic way. Steps beyond this are few and stuttering. One or two sites like MyLawBC.com are using guided pathways or interactive self-completion document assembly tools like those provided by A2J Author in the US or the various disability application/review/appeal letters or CourtNav in England and Wales. These are interactive but not, as yet, really using AI. Most self-completing form processes are actually quite simple.
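To see why ‘interactive but not AI’ is a fair description, a guided pathway can be sketched as nothing more than a fixed decision tree. The questions and referrals below are invented for the sketch; the point is that the tree is hand-written and identical for every user.

```python
# A hypothetical guided pathway: a fixed, hand-authored decision tree.
# Interactive, but there is no learning and no prediction anywhere.
PATHWAY = {
    "question": "Is your dispute about housing?",
    "yes": {
        "question": "Have you received an eviction notice?",
        "yes": "Refer to the emergency housing desk.",
        "no": "See our self-help guide on tenant rights.",
    },
    "no": "Try the general triage questionnaire.",
}

def follow(node, answers):
    """Walk the fixed tree using a list of 'yes'/'no' answers."""
    for answer in answers:
        if isinstance(node, str):
            break  # already reached a leaf (a referral or guide)
        node = node[answer]
    return node

print(follow(PATHWAY, ["yes", "no"]))  # 'See our self-help guide on tenant rights.'
```

However elaborate such trees become, every branch was put there by a human author in advance – which is exactly the gap an AI layer would fill.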
The Dutch Rechtwijzer still looks the most sophisticated of the interactive pre-AI sites. It developed the idea of guided pathways, intercutting them with nuggets of information which assisted decision-making as users progressed up the decision tree, as well as allowing online intervention by third parties such as mediators. It offers perhaps the best glimpse of how an AI system might integrate technology with improved assistance: it could advise you on the predicted outcome of any decision you were about to make.
So, you can begin to see how AI-driven provision might work in outward-facing provision. (Let’s leave aside, for the moment, its potential impact in the back office.) The issue arises of how we might get to the point that institutions are sufficiently encouraged to build products that test the possibilities.
Some may argue that the way forward is through chatbots. Indeed, claims are made that chatbots are on the threshold of using AI or actually doing so. Unfortunately, most existing chatbots, despite these assertions, are actually pretty crude, amounting to little more than guided pathways with more or less facility to handle requests in ordinary language. Chatbots certainly embody the interactivity required for the next step forward. However, chatbots, like AI, suffer heavily from over-promise and hype. They may or may not provide a transitional phase toward full AI.
Thus, the potential for AI seems clear as a general proposition across jurisdictions, though its point of entry may differ. The critical issue is who or what is going to come up with the necessary funding – something that might get easier as the general use of AI gets wider. For the moment, the lessons on the way forward seem somewhat basic – perhaps even a bit trite:
- It is pretty evident that, in the giving of information and advice, the making of referrals and the provision of self-help, AI can play a major role in promoting access to justice – though this will be limited by the digital capacities of users, on the one hand, and the data provided to the system, on the other. AI is unlikely to work if seen as a standalone, fire-and-forget system. It will be a way of leveraging human resources.
- Each jurisdiction will throw up different opportunities to explore the possibilities depending on the funding, implementation and organisation of access to justice provision;
- At this point in time, there is a particular need to share international experience and discussion.
Online food delivery tech start-up Munchery announced on Monday that it was closing – writing off capital investment of $125m on a business once valued at $300m. This type of loss is hardly welcome to the venture capitalists who backed it, but it is probably not fatal to them. No legal aid authority or legal services business, however, could afford this kind of failure. So co-operation is forced on our sector if we wish to see progress. Let’s see what we can do.
The origin of this post is an attempt next week to bring together a group of people around the world – from Canada to Australia – to discuss this topic as the result of a particular UK stimulus. This amounts to some of my working notes for the event. However, if this global discussion works, then it could serve as a pilot. It might be repeated with a wider audience or, indeed, a different topic. If you would be interested in participating in any follow-up then please do get in touch: rsmith@rogersmith.info. Likewise, email in if you would like to contribute specifically to the debate on AI.