How will artificial intelligence affect access to justice?

How will artificial intelligence (AI) affect the access to justice sector of the legal services market? Clearly, it will have a major effect on the commercial sector of law, and the Law Society of England and Wales is but the latest professional body to provide what it calls a ‘horizon scanning’ view of its likely impact. This concluded that AI will have ‘significant implications both for the legal profession and a number of areas of the law itself’. But how far are these likely to filter down to the lawyers and advisers, in both the NGO sector and the legal profession, who work in access to justice for people on low incomes?

The commercial engagement in law-applied AI is sufficiently high for the respected legal commentator Richard Tromans to use military metaphors. In a recent blogpost, he talked of the looming ‘battle’ between LexisNexis, which has just launched Lexis Analytics (leveraging its purchases of Lex Machina, Intelligize and Ravel Law); Thomson Reuters, with its Westlaw Edge platform; and a variety of other legal research tools such as ROSS Intelligence, which has just announced its extension to cover all areas of law. These all cover American law. For the time being, these titans are fighting in the biggest legal market in the world. But there can be little doubt that the winners in this struggle will, in due course, be moving to the second largest. New York today: London tomorrow.

There are four reasons why, from the point of view of England and Wales, AI will initially be focused on the worlds of Allen and Overy or Clifford Chance, the City leviathans, rather than CitizensAdvice or Hodge, Jones and Allen (to take a former legal aid firm at random). First, the American experience will be most relevant in the commercial field, where law and practice will be very similar. Second, commerce is where the big money is (Allen and Overy is just one firm already to have its own ‘tech innovation space’, Fuse). Third, the large commercial firms already have the level of clean data on clients and their cases that will elude many legal aid practices and advice agencies operating with management systems only a cut above those of ‘Lincoln lawyers’ working from their cars (though the demands of legal aid administration in England and Wales may mean that they have some legacy of better record-keeping). Fourth, the users of access to justice provision are likely to be considerably less comfortable with digital communication and interaction than professional and business users. They will have disproportionately high levels of technological, cultural, language, cognitive and physical resistance to using digital communication.

So, may AI largely pass the access to justice sector by? Will it be left as a messy, old-fashioned, paper- and oral-based legacy of what, for the rest of the legal world (and, indeed, the non-legal world), is a former way of working? Will AI prove just too expensive, too difficult and too time-consuming to introduce when you are living hand to mouth, dealing with virtually limitless workloads and reduced resources?

These are good questions. And the answers may well be: yes. But let’s pull back. What do we mean by AI? This is a definition taken from an earlier post: ‘A paper from Deloitte [states] … “a useful definition of AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.” Thus, behind AI lie a set of linked cognitive technologies that include (but are not limited to) natural language processing (the ability of the computer to deal with ordinary language), speech recognition, robotics and machine learning.’

Key to AI is the notion of machine learning. The Law Society paper defines this as the ‘incremental improvement of algorithmic predictions’. So AI can process data in various forms (eg normal conversational speech) but can also make statistical predictions of likely outcomes and learn from the success or failure of those predictions. Thus, the Law Society quotes the example of Melody, an AI-powered chatbot doctor developed in China, which asks for your symptoms and posts them, plus its tentative diagnosis, to a doctor. You can see how this sort of system could be transferred to law – particularly if integrated with an existing conversational interface like Alexa or Siri.
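The ‘incremental improvement of algorithmic predictions’ can be made concrete with a toy sketch. This is purely illustrative and not drawn from any real legal AI product: it just shows a prediction (here, the probability that a class of claim succeeds) being nudged towards each observed outcome, so the system ‘learns’ from its successes and failures. All the figures are invented.

```python
# A toy "incremental improvement" loop, illustrative only.
# The outcome data and learning rate are invented for the sketch.

def update_estimate(estimate, outcome, learning_rate=0.1):
    """Nudge a probability estimate towards an observed outcome (1 = success, 0 = failure)."""
    return estimate + learning_rate * (outcome - estimate)

# Start from a naive 50/50 prediction that a claim succeeds,
# then revise it as real case outcomes come in.
prediction = 0.5
observed_outcomes = [1, 1, 0, 1, 1, 1, 0, 1]
for outcome in observed_outcomes:
    prediction = update_estimate(prediction, outcome)

print(round(prediction, 2))
```

Real systems use far richer models, but the shape is the same: predict, observe, adjust – which is exactly what a static rulebook cannot do.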

The importance of machine learning is why we might need some notion of strict definition. Guided pathways, which take you through problems along static, pre-set paths, are not AI. So chatbot systems like the much-hyped DoNotPay are not deploying AI. Nor is a conventional advice site. Tellingly, both the Guardian and the Washington Post have carried recent articles, headed respectively ‘The rise of “pseudo-AI”’ and ‘Tech’s dirty secret’, in which they document the use of humans to mimic AI: ‘In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself. In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.’
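The distinction is easy to see in code. A guided pathway is just a fixed decision tree: every question and every branch is written in advance by a human, and nothing in it learns or predicts. The sketch below is hypothetical – the questions and thresholds are invented, not taken from any real advice tool – but it shows why such systems, however useful, are not AI under the strict definition.

```python
# Illustrative only: a "guided pathway" as a fixed, hand-written decision tree.
# Every branch is pre-set by a human; nothing here learns or predicts.
# The housing-disrepair questions and thresholds are invented.

def guided_pathway(is_tenant, landlord_notified, weeks_since_notice):
    """Route a hypothetical housing-disrepair query down pre-set branches."""
    if not is_tenant:
        return "This pathway only covers tenants."
    if not landlord_notified:
        return "First notify your landlord of the disrepair in writing."
    if weeks_since_notice < 4:
        return "Wait for the landlord to respond before escalating."
    return "Consider a formal complaint or seeking advice on a claim."

print(guided_pathway(True, True, 6))
```

However many cases pass through it, the tree never changes: that is the line between a guided pathway and machine learning.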

These cautionary points about common practice – along with the hype that accompanies AI in a legal tech market estimated at around $3bn a year in the US alone – should not deflect us from the main point. In the end, AI will make a massive difference to the legal market overall (and, of course, to society in general). The Law Society identified four likely implications for the legal profession as a whole: ‘An impact on the number of legal jobs, initially at lower grades of staff; change in the nature of legal jobs, emphasising those skills that humans particularly excel at, and consequent changes in legal education and training; changing organisational structures and business models; lower costs and changing fee structures.’

So, with all these qualifications, let’s look at how AI might affect access to justice. First, there is serendipity. There is every possibility that AI will work, like the Lord, in mysterious and unpredictable ways – or to paraphrase an old poem that became a hymn:

Deep in unfathomable mines

Of never-failing skill

[AI will] treasure up its bright designs

And work [its] sovereign will.

Thus, one unexpected application of IBM’s Watson is the manipulation of cost data on family cases at a practice in Brighton. This significantly lowered the risk of fixed fees and could, potentially, allow a prospective client to get a usable quote by inputting key data about their case. Not flashy, but effective.
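The idea behind such a quoting tool can be sketched in a few lines. This is a hypothetical illustration, not the Brighton practice’s actual system: the case types, hourly rate and complexity uplift are all invented. The point is simply that once historical cost data is clean enough, a fixed fee can be derived from a client’s key case details rather than guessed.

```python
# Illustrative only: a toy fixed-fee quote built from historical cost data.
# The case types, rates and figures are invented for this sketch.

HISTORICAL_AVG_HOURS = {"divorce": 12.0, "child_arrangements": 18.0}
HOURLY_RATE = 150.0          # assumed blended hourly rate, in GBP
COMPLEXITY_UPLIFT = 0.25     # assumed extra margin per complexity point

def fixed_fee_quote(case_type, complexity_points):
    """Quote a fixed fee: average historical hours, plus a margin for complexity."""
    base = HISTORICAL_AVG_HOURS[case_type] * HOURLY_RATE
    return round(base * (1 + COMPLEXITY_UPLIFT * complexity_points), 2)

print(fixed_fee_quote("divorce", 2))
```

The data analysis that makes the averages and uplifts trustworthy is where the AI earns its keep; the quote itself is then simple arithmetic.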

Second, the development of legal analytics could make the textbook redundant. Want to know the law on anything, including, say, housing disrepair? The answer will not be in an e-book, or even in some sophisticated updating mechanism of one. Information on the law, both basic and advanced, will be individually prepared by AI for each enquirer, from statute through case law and practice. Areas of social justice or poverty law are likely to be the last placed on such systems, but once this becomes the norm they may well be included for the sake of comprehensiveness.

The pure law may be less relevant for the type of problem encountered in the access to justice field than elsewhere. A knowledge of administrative practice may be much more important in dealing with state agencies, such as those administering benefits or immigration and asylum, or for those acting for themselves. Thus, the target would be processing the level of data in a range of advice sites, whether generalist, like CitizensAdvice and AdviceNow, or specialist, like Shelter for housing advice. A fairly uncontroversial step for these agencies would be to move to the guided pathway model. However, there would be a strong economic disincentive for organisations to take what might seem the logical route of integration and common packaging. The sites are seen as – and are – among their organisations’ crown jewels. Probably only government inducement could make integration happen.

Even harder would be the step broadly summed up by the phrase ‘sleeping with Google’, ie the integration of this type of information with a provider like Microsoft, Amazon or Google. But why should we not aspire to programme Alexa to answer benefit queries in simple language? The commercial advantage for the large providers is clear and probably helps to explain their interest in helping with portal sites, as Microsoft is doing in the US and LexisNexis is exploring here. After all, money might be made from referrals to subsidise the provision of basic information.

But this presents a major fork in the road for the current providers of access to justice information. Should they peacefully hitch their wagon to the promise of universal coverage from the new internet giants or see them, on the contrary, as invading Vikings and hoard their wares? We have yet to see how this works out. We certainly need to start talking about it.

Third, you have to reckon that the experiments with chatbots will develop into AI-provided advice within particular sectors, if not overall. In this context, it is hard not to deal with Joshua Browder, still a student at Stanford but adept at getting global coverage for his range of chatbots. He is occasionally given to statements along the lines of (in relation to his bot facilitating claims against Equifax for a security breach): ‘I hope my product will replace lawyers and, with enough success, bankrupt Equifax’. Richard Tromans came to a rather more balanced view of Browder’s actual impact, which is not really to replace lawyers at all: ‘On one level what Browder has done is quite straightforward and without using anything that one would call ‘AI’ or any other advanced tech. A pre-set chat bot Q&A routine, a form that gets filled in, some cut and pasted instructions from a local small claims court, is not world-shattering tech. But … what Browder has done is brought it all together, he’s publicised it, he’s got people engaged, he’s helped people feel they can do something about getting justice … Add in some more advanced tech and who knows where this may go.’

So, chatbots are pretty basic at the moment but – like guided pathways – they carry out some of the groundwork for the application of AI. Even if some unified system does not emerge in the near future, people are likely to develop bots that genuinely use AI to help people through particular problems – perhaps particularly those affecting literate and tech-savvy classes of user. If these prove effective, they could provide a challenge in some areas to the current sources of information.

And where does all this leave us? Well, the Law Society gave seven potential areas of AI application: document analysis, contract intelligence, document delivery, legal adviser support, clinical negligence analysis, case outcome prediction and public legal education. Some of these, such as document analysis, are probably not so relevant in access to justice cases, but public legal education and case outcome prediction are.

If we take five years as the time period to consider, it seems reasonable to suppose that various lawyer support systems will gather momentum in assisting with (and cutting the price of) client intake, case management, legal research and outcome prediction. They will be funded through the commercial drive to provide tech for lawyers. In some jurisdictions, the state – either directly through the courts or indirectly through semi-independent agencies running legal aid services – may fund some advances. These will be relevant for private practitioners and legal service NGOs.

Potential sources of funding will largely determine development. A possibly fertile field for funding and experiment by the state, private practice or the NGO sector (through grants of one kind or another) might be AI-assisted mediation in, for example, a high-volume, high-distress field like matrimonial breakup – building on the work done by the Rechtwijzer and its successor organisation Justice42 on online mediation in family cases. Here, the cuts to legal aid in England and Wales make very clear the impact of the withdrawal of services and help build political pressure to do something – which is unlikely, given the financial constraints on any future government, to mean the return of legal aid as it was.

It seems unlikely that large amounts of funding will be available within a five-year time frame to overhaul advice provision. However, we might expect cautious and halting experimentation with revolutionising the provision of advice through guided pathways, specific projects and chatbots – funded through foundations or national legal service providers like the US Legal Services Corporation.

There is likely to be a cultural shift among lawyers as they rely more and more on AI-sorted materials. A pattern is likely to emerge of clients and lawyers reaching an initial AI-assisted verdict on a problem and then submitting it to a lawyer or other adviser for professional input. That approach is bound to trickle down into access to justice areas – with unpredictable results. Indeed, in the absence of major guided change, serendipity may be a major factor in the application of AI to access to justice over the next five years – and, indeed, beyond. Let’s hear it for the modified version of William Cowper’s hymn.
