Artificial Intelligence and Access to Justice: an idiot’s guide

Let’s be clear about the title. The writer is the idiot, not you. At least, it feels like that. The legal press is full of debate about the impact of AI on the practice of law. Until recently, it seemed safe to assume that those concerned with legal services for people on low incomes could leave the prevalent mix of angst and excitement to those in the commercial field: poor people simply do not have the money to finance the necessary investment. Increasingly, however, it is difficult not to worry that this is too complacent. There must at least be value in opening up an examination of the issue. This is a start.

You can spend weeks in books about AI but, perhaps unsurprisingly, two of the most approachable guides to artificial intelligence come from commercial sources. Thomson Reuters published a series by Michael Mills on Artificial Intelligence in Law: the state of play earlier this year. Deloitte published a still helpful Demystifying Artificial Intelligence by David Schatsky, Craig Muraskin and Ragu Gurumurthy in 2014. Both take a common view on the definition of AI, the latter noting that ‘AI suffers from both too few and too many definitions’. Ultimately, both basically go for: ‘the theory and development of computer systems able to perform tasks that normally require human intelligence’. This remains a little elliptical, but the key is, as a contributor to a Law Technology debate said: ‘computers/machines and software that are capable of learning. They get smarter with time and access to additional information, thus exhibiting behaviour that often eerily replicates that of a human.’

The Deloitte authors move from discussion of the general field of AI to the ‘cognitive technologies’ that have flowed from it. These include:

(a) computer vision – the ability to identify the content of visual images, used in automated face recognition, medical imaging and consumer shopping (photograph your desired object and get an ad for it). If you have a bit of time, watch Stanford professor Fei-Fei Li, head of its artificial intelligence lab, give a TED talk on the process of developing this field.

(b) machine learning – the ability of computers to discover patterns in data and make predictions without explicit instructions. This is the technology that, at the simplest level, infuriatingly blocks your credit card every time you go to France without telling your provider.

(c) natural language processing. This is what you would expect – the technology behind working with text in a way that humans do. IBM’s Watson has digested massive amounts of medical data on which it can give predictions of diagnosis.

(d) robotics – self-explanatory.

(e) speech recognition – apparent every time you use Siri.

(f) expert systems – added in Michael Mills’ analysis.
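To make the machine-learning idea in (b) concrete, here is a toy sketch of the credit-card example above: a model that learns a cardholder’s spending pattern from past transactions and flags anything that departs from it. This is an invented illustration, nothing like the scale or sophistication of a real fraud system, but it shows the core idea of deriving a rule from data rather than writing it by hand.

```python
from collections import Counter

def train(history):
    """Learn the cardholder's pattern: how often each country appears."""
    counts = Counter(tx["country"] for tx in history)
    total = sum(counts.values())
    return {country: n / total for country, n in counts.items()}

def is_suspicious(model, tx, threshold=0.05):
    """Flag a transaction from a country that is rare (or unseen) in the history."""
    return model.get(tx["country"], 0.0) < threshold

# A history dominated by UK spending, with an occasional US purchase.
history = [{"country": "UK"}] * 95 + [{"country": "US"}] * 5
model = train(history)

print(is_suspicious(model, {"country": "France"}))  # unseen country -> True
print(is_suspicious(model, {"country": "UK"}))      # habitual country -> False
```

No one told the program that France is suspicious; it inferred that from the data. Which is exactly why the block happens: the pattern, not a rule about France, is doing the work.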

Many practical uses of these technologies bring them together in various combinations – as is apparent in all the work on driverless cars. Investment in these technologies is immense. IBM’s Watson, somewhat to the chagrin of its competitors, hogs a lot of the coverage. IBM, no doubt, wants to get maximum return on its £1bn investment. But Google and Facebook are also major investors. And there are others wishing to make the point (see above under ‘chagrin’) that:

It is one thing to say that machine learning and AI will deeply impact legal practice. It is another to say that Watson will have a deep impact, or a more significant impact than other technologies. Watson is partially a machine learning offering, but there are many other machine learning offerings.

These technologies are going to transform our world, with lively debate as to exactly how, and whether jobs will be created or lost. They justify book titles like The Second Machine Age. More immediately, the issue for us is how they will impact on the practice of law. The two big areas are legal research and electronic discovery. In the former, ROSS Intelligence is developing IBM Watson’s capacities in relation to bankruptcy law. Again, if you have time, it might be worth seeing ROSS’s Andrew Arruda promoting his product on YouTube. Thomson Reuters is also working with Watson – reportedly initially in the field of financial services regulation. In relation to the latter, there are a number of TAR or ‘technology assisted review’ products that can sift massive amounts of data.

Both of these are likely to have major impact in large corporate firms. The dire predictions of the end of lawyers are probably overblown. But the numbers of paralegals and lawyers in corporate practice are bound to be affected by these developments. Lawyers will not disappear: but they will reduce. Supporters of this process suggest that survivors will be more productive and their lives more fulfilling: we will see.

In the UK, discussion of AI is moving beyond the apocalyptic predictions of Richard Susskind and the harrumphing opposition of diehards who mutter ‘no surrender’ to the thought of technology. The doyen of legal commentators, Joshua Rozenberg QC, used one of his radio programmes last month to cover the issue. Among his interviewees was the chief executive of Riverview Law, who extolled his firm’s use of AI, particularly in the employment field in the aftermath of a takeover. And, also last month, the doyen of UK judges, Lord Neuberger, President of the UK Supreme Court, slipped a paragraph into a speech on ethics and advocacy referring to AI and the Susskind thesis. He suggested that ‘The legal profession should … be preparing for the problems and opportunities which would arise from … an enormous potential area of development and one of the most difficult challenges will be to consider the potential ethical implications and challenges.’ He may be a little behind the curve here. Unlike a few years ago, there is now very serious grappling with the fact of AI, at least among solicitors in the City.

So, should we in what we would still call in the UK ‘the legal aid sector’ grapple with AI, or is it irrelevant to our clients? It is worth remembering that expert systems depend on major inputs of data. Michael Mills points out that the combined resources of IBM and Thomson Reuters are reported to be taking a year to get a beta version of their financial services package on the road. So, dreams that the nationwide advice service provided by the Citizens Advice service and other agencies might be transformed by purchase of the equivalent of 2001’s HAL or Apple’s Siri are a long way off. But there might be areas of law which affect clients with high as well as low incomes. Employment and immigration would be two examples. One could certainly imagine that an AI approach to the former would be useful. Ever tried to get your head around the rules on shared parental leave? Private clients – including, for employment law, employers – might just provide the necessary capital outlay.

The other potential large funder would be government. After all, the Department for Work and Pensions is engaged in the massive process of putting benefits online – not entirely successfully as yet. That would raise an ethical issue. I once worked for the Child Poverty Action Group, the nation’s biggest publisher of social security guides. Their sales sustained the organisation because advisers found government equivalents unsatisfactorily one-sided.

One could imagine advocates finding a use for an all-singing, all-dancing digest of relevant law that allowed them to produce skeleton arguments. This might well include whole swathes of public law, which again would have ultimate beneficiaries who were both wealthy and poor. The cost of such a system would potentially be more easily met by advocates who combined commercial and public work – which in London would privilege chambers like Blackstone or Brick Court over those focused more traditionally on human and civil rights. Even in the days of UK Brexit and US exceptionalism, one effect of technology is likely to be to encourage advocates and judges to make even greater use of foreign jurisprudence – simply because the system will take them in this direction.

A conference in Melbourne earlier this month under the title Access to Justice, Design Thinking and Artificial Intelligence included presentations of the Rechtwijzer. That raises an interesting question about whether the Rechtwijzer really represents AI. Does it have the core capacity to reason for itself, or is it better seen as a more mechanistic use of guided pathways? After all, one of the virtues of the programme is that very simplicity – the Rechtwijzer team pioneered the use of guided pathways that take the user through to a resolution of their problem, now assisted in version 2 by the possibility of online intervention by a living mediator. Ultimately, the definitional issue does not really matter. The Rechtwijzer can be AI or not: whatever it is, the programme represents a paradigm-buster for online advice provision. The core point is that we are on the cusp of using the interactive capacity of the internet. And it probably is true that guided pathways represent a much more attainable step forward in the field of legal services for poor people than the wondrous world of AI. But, hey, maybe I would not be too sure about that, and these thoughts might all too easily seem – or prove – to be idiotic. Any responses: send them in.
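The distinction between reasoning and guided pathways can be made concrete. A guided pathway is, at bottom, a hand-authored decision tree: every question and every route is written by a human, and the computer merely walks it. The sketch below is purely illustrative – the questions and outcomes are invented, not taken from the Rechtwijzer – but it shows why such a system is mechanistic rather than intelligent.

```python
# A guided pathway as a hand-authored decision tree: each node asks a yes/no
# question and routes the user onward, or delivers a resolution. All questions
# and outcomes here are invented for illustration.
PATHWAY = {
    "start": {
        "question": "Do both parties agree the relationship has ended?",
        "yes": "children",
        "no": "mediation",
    },
    "children": {
        "question": "Are there children under 18?",
        "yes": "parenting_plan",
        "no": "asset_split",
    },
    "mediation": {"outcome": "Refer to an online mediator."},
    "parenting_plan": {"outcome": "Draft a parenting plan together."},
    "asset_split": {"outcome": "Proceed to the asset-division module."},
}

def walk(pathway, answers):
    """Follow a sequence of yes/no answers from the start node to an outcome."""
    node = pathway["start"]
    for answer in answers:
        node = pathway[node[answer]]
        if "outcome" in node:
            return node["outcome"]
    return node["question"]  # ran out of answers mid-pathway

print(walk(PATHWAY, ["yes", "yes"]))  # -> "Draft a parenting plan together."
```

Nothing here learns or infers: change a question and you must edit the tree by hand. That simplicity is the virtue noted above – and why guided pathways are attainable today in a way that genuine AI is not.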

 
