Artificial Intelligence, Legal Services and Justice

Discussion of artificial intelligence is hard to escape. There are recently published macro studies like Max Tegmark’s Life 3.0, which opens with an apocalyptic (fictional) vision of a dystopian future from which we are to be (putatively) saved by the (real) intervention of the likes of Elon Musk and Stephen Hawking in support of the work of Tegmark’s Future of Life Institute. Its mission is to ‘mitigate existential risks facing humanity, particularly … from advanced artificial intelligence’. Less grandiosely, there is a current investigation by the House of Lords artificial intelligence committee which has attracted submissions from a wide range of institutions, from the organisation behind Guide Dogs for the Blind to legal futurist Richard Susskind. More parochially, the Law Society of England and Wales has just put a figure on likely job losses due to AI, and there is a daily stream of legal press coverage of new AI initiatives. All this plus, here and there, indications that maybe all is not quite as good as some of the hype suggests.

We should get a definition of AI out of the way first – a notoriously tricky task. The Government Office for Science, in a paper on the impact of AI on the ‘decision maker’, stated that ‘Artificial intelligence is a broad term … More generally it refers to the analysis of data to model some aspect of the world. Inferences from these models are then used to predict and anticipate possible future events.’ A paper from Deloitte provides a more concrete definition: ‘a useful definition of AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.’ Thus, behind AI lies a set of linked cognitive technologies that include (but are not limited to) natural language processing (the ability of the computer to deal with ordinary language), speech recognition, robotics and machine learning. Traditionally, a distinction is made between ‘strong’ and ‘weak’ AI – the former, general intelligence, is illustrated by HAL’s ‘I’m sorry, Dave. I’m afraid I can’t do that’ as it develops a ‘mind of its own’. Weak AI is the specific use of such technologies for set tasks.
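To make the ‘weak AI’ idea concrete, here is a minimal sketch – using the scikit-learn library, with entirely invented queries and labels – of a machine-learning model trained for one narrow task: routing a legal question to a practice area. It is an illustration of the category, not a description of any product mentioned here.

```python
# A minimal sketch of 'weak' AI: a machine-learning model trained for one
# narrow, set task (routing a legal query to a practice area). The training
# data and labels below are hypothetical illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: queries paired with practice areas.
queries = [
    "my landlord will not return my deposit",
    "I was dismissed without notice from my job",
    "the airline cancelled my flight and refuses a refund",
    "my employer has not paid my overtime",
]
areas = ["housing", "employment", "consumer", "employment"]

# Pipeline: convert text to TF-IDF features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(queries, areas)

# The model can now label a new query -- and nothing else. It has no
# 'mind of its own'; outside this single task it is useless.
print(model.predict(["my employer dismissed me without notice"]))
# likely ['employment']
```

The point of the sketch is how bounded the capability is: the same pattern (features in, label out) sits behind much of what the legal press calls AI.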

For the Lords committee, LexisNexis provides examples from its own products which, if you will forgive the self-promotion, help to bind the abstract definitions to legal practicalities: ‘LexisNexis UK considers artificial intelligence to be any system capable of performing tasks utilising some aspects of human intelligence such as logic, reasoning, learning and deduction. In our global business, we are investing in artificial intelligence to help benefit the legal industry, including in the following areas:
• Assisted decision making: Lex Machina[1], our legal analytics platform, mines litigation data in the US to help attorneys prepare for litigation based on data trends.
• Automated review: Our technology scans legal documentation[2] to review and optimise documentation through best-practice clauses, enhanced drafting and case citation checking.
• Natural language research: Lexis Answers[3] utilises machine learning and natural language processing to make legal research easier to use and more efficient.
• Analytical research: Ravel Law[4] utilises machine learning to provide legal research and insight from massive amounts of legal data.’

Peter Gunst, in an interesting recent article, provided an overlapping list of examples: ‘Until now, applications of AI remain limited to a number of specific applications, typically within a particular legal domain. Companies such as Kira use AI to automatically analyze the text of contracts, claiming to provide 20 to 90% in time savings without sacrificing accuracy. Lex Machina, a startup founded at Stanford Law School and purchased by Lexis Nexis in 2015, analyzes data from patents and judgments. The software can, on the basis of natural language processing (an AI subdomain), automatically divide judgements into relevant categories, identify arguments that have a greater chance of success with a particular judge, and summarize judicial decisions. Companies like ROSS Intelligence … allow lawyers to search case law more intelligently. Where traditional solutions are keyword-based, AI applications can understand and associate concepts, use this to answer complex questions, and deliver faster and more advanced results.’ He adds two examples from areas of more direct relevance to ordinary people: ‘A more recent category of AI applications targets the consumer directly. Initiatives like DoNotPay and Belgian Lee & Ally promise an interactive experience where a consumer can get legal assistance from a digital assistant, often as part of a natural conversation. Today, these solutions typically rely on more rigid decision trees, and AI forms only a limited part of the application.’

DoNotPay has been the subject of some discussion. Commentator Richard Tromans concluded: ‘On one level what Browder [its author] has done is quite straightforward and without using anything that one would call ‘AI’ or any other advanced tech. A pre-set chat bot Q&A routine, a form that gets filled in, some cut and pasted instructions from a local small claims court, is not world-shattering tech. But … [he] has … brought it all together, he’s publicised it, he’s got people engaged, he’s helped people feel they can do something about getting justice. To conclude, this seems to be far less about technology and any kind of ‘robot lawyer’ and more about someone who feels passionately about justice doing a brilliant job encouraging other people to get justice for themselves too. And that has to be a good thing.’ So, the hype is useful but the actual delivery is not quite up to it.
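To show how modest the ‘pre-set chat bot Q&A routine, a form that gets filled in’ really is, here is a minimal sketch of such a decision tree in Python. The questions, branches and form template are invented for illustration; this is not DoNotPay’s actual flow.

```python
# A minimal sketch of a 'pre-set chat bot Q&A routine': a hard-coded
# decision tree that asks questions, follows branches, and fills in a
# form template. All content is hypothetical illustration.

# Each node maps to (question, {answer: next node}).
TREE = {
    "start": ("Did you receive a parking ticket?",
              {"yes": "signage", "no": "end"}),
    "signage": ("Was the signage unclear or missing?",
                {"yes": "appeal", "no": "end"}),
}

FORM = "APPEAL: I contest ticket {ticket_no} because the signage was {reason}."

def run() -> None:
    node = "start"
    while node in TREE:                      # walk the tree until a leaf
        question, branches = TREE[node]
        answer = input(question + " (yes/no) ").strip().lower()
        node = branches.get(answer, "end")
    if node == "appeal":                     # leaf reached: fill the form
        ticket = input("Ticket number? ")
        print(FORM.format(ticket_no=ticket, reason="unclear or missing"))
    else:
        print("This (very limited) script found no grounds to appeal.")

if __name__ == "__main__":
    run()
```

Every question, branch and output is written by hand in advance: nothing here learns, infers or ‘understands’. That is the gap between the ‘robot lawyer’ headline and the delivery Tromans describes.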

The fate of the Australian Nadia project provides another angle. AI technology – even IBM’s much-vaunted Watson – has not yet developed enough to provide acceptable support for a sophisticated customer-care service answering questions about a government benefit. Response times were simply too slow.

So, where does this leave us?

First, the development of AI raises a host of major general and ethical questions that need to be addressed and with which we, as citizens, must engage. Max Tegmark has thrown himself into doing that and has participated in drawing up the ‘Asilomar AI Principles’. Most of these are of general import but Number 8 is specific to ‘judicial transparency’ and states that ‘any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority’. This is a reference to the exploration of automated ‘black box’ systems for determination of sentence or parole: something closer than you might think. Wisconsin’s use of such a black box system known as COMPAS, Correctional Offender Management Profiling for Alternative Sanctions, was unsuccessfully challenged in the Wisconsin courts and the Supreme Court declined to hear the case in June. However, we should surely ensure that any decision derived from AI and affecting the public sphere, especially in relation to justice (law enforcement or warfare), is both explicable in human terms and something for which some real person is accountable. Proceed no further, Robojudge.
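What ‘a satisfactory explanation auditable by a competent human authority’ might mean in practice can be sketched in code. The fragment below contrasts a black box with a transparent scoring rule whose every factor and weight can be read and challenged; the factors and weights are invented for illustration and bear no relation to COMPAS.

```python
# A minimal sketch of an auditable (non-black-box) risk score: the result
# is a visible weighted sum, returned together with a human-readable
# account of how it was reached. Factors and weights are hypothetical.

WEIGHTS = {"prior_convictions": 2.0, "age_under_25": 1.0, "employed": -1.5}

def risk_score(profile: dict) -> tuple[float, list[str]]:
    """Return a score plus the reasons a human authority could audit."""
    score, reasons = 0.0, []
    for factor, weight in WEIGHTS.items():
        if profile.get(factor):
            score += weight
            reasons.append(f"{factor} contributed {weight:+.1f}")
    return score, reasons

score, reasons = risk_score({"prior_convictions": True, "employed": True})
print(score)    # 0.5
print(reasons)  # each contributing factor, open to challenge in court
```

A proprietary black-box system offers only the first number; the Asilomar principle, in effect, demands the second output too.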

Second, as LexisNexis and Peter Gunst make clear, AI is advancing into the heartlands of commercial practice. That is undoubtedly aided by the existence of large amounts of clean data; the potential international application of programmes; and the availability of money from lucrative practice. The practical result of this automation is a reduced need for labour within the legal services industry as a whole. The Law Society estimates: ‘Over the longer term, the number of jobs in the legal services sector will be increasingly affected by automation of legal services functions. This could mean that by 2038 total employment in the sector could be 20% less than it would otherwise have been, with a loss of 78,000 jobs – equal to 67,000 full-time equivalent jobs – compared to if productivity growth continued at its current rate.’ As a consequence, the Society sees employment peaking at its 2016 figure and slowly subsiding. The reasoning behind the figures is not entirely clear but some such impact seems intuitively plausible.

Third, the impact of AI on services for those on low incomes will be slower, as development is hindered by a lack of clean data and of major sources of funding. The optimistic prospect remains, however, that in this sector AI will not only reduce costs but open up major new opportunities for provision, as a wider range of people on low incomes become able to take advantage of services built on the improvements in communication that AI can bring. In due course, there could be massive benefits from a legally orientated Siri or Alexa. What those benefits will be, and who will pay for them, remains to be seen.
