Jim Sandman, President of the US Legal Services Corporation, told the opening session of its annual conference on technical issues that it was his favourite conference of the year. You could see why: as the head of an organisation once more facing a hostile presidency, in New Orleans he had only friends – the technical enthusiasts of the legal services movement funded by the LSC. They could retain their optimism about the future even when Mr Sandman had to tell them that the Federal shutdown was going to hit their grants from next month.
Even for those without the cares of high office in a politically conflicted society, the conference has plenty to recommend it. Numbers were capped at 500. There were more foreigners than ever before. The two days ran with multiple parallel sessions and few plenaries – and those dominated by a challenging rapid-fire tech format which gives a speaker 20 slides advancing automatically at 20-second intervals. It was meticulously well organised by the LSC staff. But the best – and, in my experience, unique – element is the people. The bulk are firmly rooted in the field, from organisations that have, or aspire to have, some of the $4m disbursed annually through the LSC's Technology Initiative Grants programme. That takes the weight of the conference away from those seeking to sell a product or give an academic analysis and puts it on those actually struggling to implement technological improvements in a very practical context.
The topic of artificial intelligence provides a good entry into exploring the unique vantage point of the conference. Let’s highlight three issues that came up during the day.
The heavyweight session of the day tackled AI head on and was addressed by IV Ashton of LegalServer, Abhijeet Chavan, now of Tyler Technologies (which absorbed Modria, the firm involved with the Rechtwijzer), and Justin Brownstone of Gavelytics. Between them, they gave a really thoughtful presentation. For a start, it was suggested that AI should really be renamed ‘automated insight’. Its results depend on adequate data and appropriate algorithms, both of which are susceptible to human error. Indeed, the retention of humans – as a check on performance – was a theme. The dependence on data means that AI systems will make errors, and these will be more likely in ‘extreme circumstances’ where the machine has little empirical data from which to learn. That seems a particularly pertinent observation when AI is applied to the often messy circumstances around a bail or parole decision.
An earlier session, led by Virginia Eubanks, examined another still largely unexplored aspect of the onward march of AI – the role of representation and advice when decisions are made by automated systems. She had studied this in Indiana, but the same issue of ‘the computer says no’ will be encountered everywhere. It may be particularly acute in England and Wales, where adjudication may well be effectively automated through the appeal process. That is going to place extreme weight on establishing the initial facts of a claim, creating an opportunity for the sort of self-help assistance being developed to cover disability claims. People are likely to need to confront simplified government decision-making processes with initial fact-gathering that draws out the complexities of their actual lives.
Finally, a consortium of organisations from West Tennessee presented on their LSC-funded project with Tom Martin of LawDroid to develop a chatbot that could produce something like an undefended divorce petition. The demonstration of a project about to go live brought out the practical issues not only in this project but more generally in relation to the current state of chatbots. On the one hand, chatbots can be very effective, and users seem to like this way of getting responses if they are quick, easy to understand and accurate – the project’s market research confirmed as much. On the other, the interaction can still seem simplistic, and language is a key challenge.
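At its core, a document-assembly chatbot of this kind is a scripted question-and-answer flow whose responses are mapped onto a form template. The sketch below is purely illustrative – the questions, field names and petition wording are hypothetical, not taken from the actual LawDroid/West Tennessee product:

```python
# Illustrative sketch only: a minimal rule-based intake flow of the kind a
# document-assembly chatbot might use. All questions, fields and template
# text are hypothetical, not those of the real project.

QUESTIONS = [
    ("petitioner", "What is your full name?"),
    ("respondent", "What is your spouse's full name?"),
    ("marriage_date", "On what date were you married (YYYY-MM-DD)?"),
    ("county", "In which county do you live?"),
]

PETITION_TEMPLATE = (
    "IN THE COURT OF {county} COUNTY\n"
    "Petitioner {petitioner} requests dissolution of the marriage to "
    "{respondent}, entered into on {marriage_date}."
)

def run_intake(answer_source):
    """Collect an answer for each scripted question from a callable
    (question -> answer) and return the completed petition text."""
    answers = {}
    for field, question in QUESTIONS:
        reply = (answer_source(question) or "").strip()
        if not reply:
            raise ValueError(f"An answer is required for: {question}")
        answers[field] = reply
    return PETITION_TEMPLATE.format(**answers)

if __name__ == "__main__":
    # Interactive use would pass the built-in input();
    # here a scripted dictionary stands in for a user.
    scripted = {
        "What is your full name?": "Alex Doe",
        "What is your spouse's full name?": "Sam Doe",
        "On what date were you married (YYYY-MM-DD)?": "2010-06-01",
        "In which county do you live?": "Shelby",
    }
    print(run_intake(scripted.get))
```

The point of the sketch is the project’s own limitation: a flow like this can only complete the form (and only for the simple, uncontested case it was scripted for) – it cannot file the document, and it cannot notice anything the script does not ask about.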
There is also another point. The Tennessee organisations effectively took as the starting point for the project a couple going through an uncontested divorce with no complicated or hidden issues. The bot could then complete the appropriate form, though it could not as yet file it. But, if you cross the practical bridge from passive information to active advice, then it all gets rather more complicated. This is a pretty major hurdle in the US – though Jim Sandman announced some progress in easing the ban on ‘the unauthorised practice of law’. If you start to think that a person approaching a divorce might need advice, the picture changes. For example, how do you address the question of informed consent by the economically weaker party (generally the woman)? How can you flag issues like hidden domestic violence? How can you deal with complicated circumstances like children not of the marriage but for whom one party may have accepted some responsibility? These kinds of questions take you way beyond what a simple chatbot can deal with and into the territory opened up by much more sophisticated (and expensive) programmes like the Dutch Rechtwijzer. So, very practically, the Tennessee project confirmed what we know about chatbots. They are, in that nice legal phrase, a ‘necessary but not sufficient’ step towards the massive shift of making information and advice available on the net in an automated but interactive and individualised way. We need experiments like that in Tennessee to see how they work out and to draw out the issues that arise. But this stage is likely to be but an interim one as we begin to demand more.
Well, the sun rises on the Big Easy and another day beckons. President Trump is apparently coming to town next week. It is nice to know that there is another America, where one speaker could even fantasise that Justice Michelle Obama might be on the Supreme Court in twenty years’ time. That would ease the lot of Mr Sandman’s successors.