David Willink: Barrister, Lamb Chambers

This feature is part of our #LondonCalling series, highlighting perspectives from across the pond.


A fascinating piece of research was published recently. Researchers at UCL, the University of Sheffield and the University of Pennsylvania trained a machine-learning system to analyse textual evidence extracted from cases before the European Court of Human Rights, with the aim of predicting the outcome of individual cases. By analysing the text of past judgments, extracting the sections pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved, the system was able to predict the outcome of the cases presented to it with an accuracy of 79%. (The research paper is here: https://peerj.com/articles/cs-93/)
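
For those curious about what such a system looks like under the bonnet, the short sketch below gives the general flavour in Python. It is illustrative only, and not a reconstruction of the researchers’ own code: the two case extracts and their labels are invented placeholders, and scikit-learn’s tf-idf weighting and LinearSVC stand in for the paper’s feature extraction and classifier (the paper reports using word n-grams fed to a linear support vector machine).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Invented placeholder extracts standing in for the "facts" sections of judgments;
# labels: 1 = violation found, 0 = no violation.
texts = [
    "The applicant was detained in overcrowded conditions without adequate medical care.",
    "The applicant complains that the domestic civil proceedings were excessively lengthy.",
]
labels = [1, 0]

# Word n-gram features (lengths 1 to 4) feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4)),
    LinearSVC(),
)
model.fit(texts, labels)

# Predict the outcome of a new, unseen extract.
print(model.predict(["The applicant alleges ill-treatment while in detention."]))

On a real corpus, the 79% figure corresponds to accuracy measured by cross-validation over many hundreds of such extracts, not to a model trained on a handful of examples.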

Incidentally, for those interested in legal theory, the analysis found that the court’s judgments correlate strongly with the non-legal facts of a case rather than directly with the legal arguments, suggesting that the judges are ‘realists’ rather than ‘formalists’. This supports findings from previous studies of the decision-making processes of other high-level courts, including the US Supreme Court.

The authors of the research conclude:

“Overall, we believe that building a text-based predictive system of judicial decisions can offer lawyers and judges a useful assisting tool. The system may be used to rapidly identify cases and extract patterns that correlate with certain outcomes. It can also be used to develop prior indicators for diagnosing potential violations of specific Articles in lodged applications and eventually prioritise the decision process on cases where violation seems very likely. This may improve the significant delay imposed by the Court and encourage more applications by individuals who may have been discouraged by the expected time delays.”

The research sets out which words were found to be the best predictors of violations, or non-violations, of the particular rights examined (the rights to protection from inhuman and degrading treatment, to a fair trial, and to privacy). Some superficial amusement, or at least interest, can be had by picking out some examples. Predictors of violations of the right to protection from inhuman and degrading treatment include “Ukraine” and, bizarrely, “July”, whereas “June” was a predictor of non-violation. In the case of the right to a fair trial, “Moscow” was a predictor of non-violation, whereas “January”, “February” and “September” were all predictors of violation. And in the case of the right to privacy, violations were characterised by phrases such as “national security”, “crime protection” and “public authority exercise”, whereas non-violation was predicted by the word “Netherlands”.

Perhaps more seriously, the publication of larger and longer-term studies in this area would appear to leave such a system open to the advocate’s equivalent of “teaching to the test”: submissions to a court peppered with words identified as “effective”, but of marginal relevance to the case, in order to improve their statistical score. In this jurisdiction, written briefs to the court do not (apart from exceptional cases, where third-party interveners may be invited to make written submissions) form part of the adversarial process; skeleton arguments are the closest we have, and it is almost universally accepted that the less skeletal they are and the closer they get to written speeches, the less effective they are. But one may speculate that a system combining technology like that described here with the Online Court proposals I described in an earlier article, under which pleadings and evidence will have to be submitted electronically, could be vulnerable to that sort of manipulation.

In reading the research, I encountered an article which posed the following rhetorical questions:

“Will computers revolutionise the practice of law and the administration of justice, as they will almost everything else? Will they help the legal profession in the analysis of legal material? Will they help make law less unpredictable?”

While you may recognise the sentiment of those questions, you may not recognise the quotation; I’m afraid I did not. Astonishingly, they come from an article written as long ago as 1963 (“What Computers Can Do: Analysis and Prediction of Judicial Decisions”; Lawlor, American Bar Association Journal, vol 49 no 4: http://www.jstor.org/stable/25722338), which notes that the ABA had been devoting attention to these questions since 1952.

Maybe you’ll think me naïve for being surprised at this; maybe it’s just that the mother jurisdiction of the common law has a lot of catching up to do. In any event, it does no harm to be reminded that the first steps of the journey we are now undertaking were taken a long time ago.

 

David Willink is #OurManInLondon