A couple of recent articles chart the onward march of Artificial Intelligence towards the prediction of judicial decision-making, something which, if it became mainstream, would be as relevant for ‘poverty’ law as for any other area.
Bloomberg Law has just launched a Litigation Analytics tool, which seeks to predict the behaviour and decision-making of individual judges. So, if you have a case before, for example, a Mr Justice Scalia, how is he likely to react, judged on a statistical analysis of his record rather than on your hunch as an experienced lawyer? Well, mining data from the courts, company analyses and elsewhere, Bloomberg can predict motion outcomes, appeal outcomes, average time to resolution and case types. It also tracks the individual lawyers representing clients before individual judges. The data cover all federal judges since 2007.
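To give a flavour of the kind of record-mining such a tool relies on, here is a purely illustrative Python sketch. The case records, judge names and numbers are invented and have no connection to Bloomberg’s actual data or methods; the point is simply that a judge’s track record on a given motion type can be boiled down to a number.

```python
# Purely illustrative: compute a judge's historical grant rate for a motion type.
# The records below are invented and bear no relation to Bloomberg's data model.
cases = [
    {"judge": "Scalia J", "motion": "summary judgment", "granted": True},
    {"judge": "Scalia J", "motion": "summary judgment", "granted": False},
    {"judge": "Scalia J", "motion": "dismiss", "granted": True},
    {"judge": "Holmes J", "motion": "summary judgment", "granted": True},
]

def grant_rate(records, judge, motion):
    """Share of motions of the given type that the given judge has granted."""
    relevant = [r for r in records if r["judge"] == judge and r["motion"] == motion]
    if not relevant:
        return None  # no track record to draw on
    return sum(r["granted"] for r in relevant) / len(relevant)

print(grant_rate(cases, "Scalia J", "summary judgment"))  # 0.5 on this toy data
```

A real product would, of course, layer far richer signals on top of counts like this: case type, opposing counsel, time to resolution and so on.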
Bloomberg’s Darby Green told the ABA Journal that she believes ‘we’re at an inflection point right now. Companies and lawyers are primed to start using predictive analytics more and more. Any lawyer will tell you that prior behaviour is not a guarantee of future behaviour but it can help make you better informed as you make decisions’. In a webinar on 1 November, Ms Green promises to show how to use Bloomberg’s new tool to: ‘Uncover relationships among law firms, companies and judges to inform litigation strategy; Understand a judge’s behavior when ruling on certain motions; Better predict possible litigation outcomes through data visualization’.
Bloomberg’s move has given rise to a certain amount of media observation on the emerging market for this kind of tool. Legaltechnology.com reported that ‘The launch will put Bloomberg in competition with litigation data mining company Lex Machina, which was acquired by LexisNexis in 2015 and through its Legal Analytics platform provides insights about judges, lawyers, parties and patents. The Silicon Valley company initially focussed on IP litigation but with LexisNexis’ backing is expected to significantly extend its offering and in September unveiled a Courts and Judges Comparator and Law Firm Comparator that instantly compare the court results and performance of both law firms and courts and judges in the U.S.’
The Bloomberg analysis can certainly reveal which judges are stingy in granting motions, but the real value of such a tool would lie in replicating the experienced advocate’s hunch that a particular judge might be more susceptible to one line of argument than another. Our fictional Mr Justice Scalia might well, for example, have expressed views on how the Constitution is to be interpreted that could shape how an advocate presents a case. Mining those views is where the machine will come closest to rivalling the intuition and learning of an experienced advocate.
Meanwhile, back in Europe, a group of academics have had a go at predicting decisions of the European Court of Human Rights. They surmised that ‘published judgments can be used to test the possibility of a text-based analysis for ex ante predictions of outcomes on the assumption that there is enough similarity between (at least) certain chunks of the text of published judgments and applications lodged with the Court and/or briefs submitted by parties with respect to pending cases.’ Their analysis concerned decisions under Articles 3 (torture and ill treatment), 6 (fair trial) and 8 (right to family life).
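For readers curious about the shape of such a model, the sketch below shows one common way to build a text-based outcome classifier in Python with scikit-learn: n-gram features feeding a linear support vector machine. It is offered only as an illustration of the general technique; the judgment snippets and outcome labels are invented so that the example runs, and nothing here reproduces the authors’ actual dataset or code.

```python
# Toy text-classification pipeline: n-gram features plus a linear SVM,
# predicting 'violation' (1) versus 'no violation' (0) from judgment text.
# The documents and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

judgment_texts = [
    "The applicant was detained for months without access to a lawyer.",
    "The domestic courts examined the complaint fully and fairly.",
    "The applicant's correspondence with his family was restricted.",
    "The proceedings were concluded within a reasonable time.",
]
outcomes = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LinearSVC(),
)
model.fit(judgment_texts, outcomes)

print(model.predict(["The applicant had no effective access to the courts."]))
```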
The academics achieved a predictive accuracy of around 79 per cent. The strongest predictive element was the facts: ‘we observed that the information regarding the factual background of the case as this is formulated by the Court in the relevant subsection of its judgments is the most important part obtaining on average the strongest predictive performance of the Court’s decision outcome’. They recognised, however, that this finding needed careful interpretation. The ‘Law’ subsection proved a weaker predictor, partly for structural reasons: ‘The relatively lower predictive accuracy of the ‘Law’ subsection could also be an indicator of the fact that legal reasons and arguments of a case have a weaker correlation with decisions made by the Court. However, this last remark should be seriously mitigated since, as we have already observed, many inadmissibility cases do not contain a separate ‘Law’ subsection.’
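The per-section comparison can be mocked up along the same lines, training one classifier per subsection and scoring each by cross-validation. The handful of records below are invented, so the printed scores illustrate only the mechanics, not the 79 per cent reported in the paper.

```python
# Compare how well each judgment subsection predicts the outcome by training
# one classifier per section and scoring it with cross-validation.
# All records and labels are invented; only the mechanics are real.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

records = [
    {"facts": "detained without charge for months", "law": "Article 3 case law cited", "violation": 1},
    {"facts": "complaint examined by three courts", "law": "Article 6 principles restated", "violation": 0},
    {"facts": "family visits refused repeatedly", "law": "Article 8 proportionality test", "violation": 1},
    {"facts": "hearing held within a year", "law": "reasonable time requirement", "violation": 0},
    {"facts": "no medical care provided in detention", "law": "positive obligations discussed", "violation": 1},
    {"facts": "appeal heard with full reasons given", "law": "margin of appreciation applied", "violation": 0},
]
labels = [r["violation"] for r in records]

for section in ("facts", "law"):
    texts = [r[section] for r in records]
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    scores = cross_val_score(model, texts, labels, cv=3)  # 3-fold cross-validation
    print(f"{section}: mean accuracy {scores.mean():.2f}")
```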
The practical message of this research for advocates would appear to be that there is considerable value in articulating the facts of a case in terms similar to those of successful precedents. Again, this is hardly rocket science.
So, what do we take away from these two developments? Powerful academic and commercial forces are engaging with AI for the purposes of judicial prediction. In terms of performance, it is still early days but, no doubt, there is more valuable insight to come.