ODR: more on key performance indicators for access to justice

Professor Richard Moorhead is Chair of Law and Professional Ethics at University College London and a prolific blogger. He tweets as @RichardMoorhead.

Here are some brief, non-comprehensive and preliminary thoughts on Roger Smith’s call for an ODR access to justice audit.  If it sounds like I am hedging, I am.  I think that Roger’s call for a debate is an important one, and I just want to nudge it on a bit, whilst admitting I have not sat down and thought about this in great depth. The points he makes to which I respond are in bold.

1. Are there measurements available on the need for court and tribunal services by various elements of the population, and how is it proposed that this digitalisation project will prove that it meets those needs?

A version of the Civil and Social Justice Survey (CSJS) would be useful to measure legal need and, over time, whether the population is changing in how it understands and responds to such need. I think the CSJS was discontinued post-2012ish [i.e. after the major cuts to legal aid] and in some ways it’s a fairly blunt instrument (it’s not great at distinguishing potentially justiciable needs from needs which ‘require’ dispute resolution, for instance). The Legal Education Foundation seem to be doing some work with the authors of the survey (my colleagues Pleasence and Balmer), so maybe they are thinking of doing more here? An independently run survey could be a powerful barometer, and would give some idea of the distribution of problems, demographics and the like. Within such a survey it might be possible to drill down or focus on some specific problems that were more clearly in need of ODR and could be examined more thoroughly to see how the OCS is working.

2. Are there coherent plans for integrating digital court and tribunal services with existing advice and information provision and how will this be measured?

Would this be dealt with by a combination of the CSJS approach (which looked at ease of finding advice/help) and evaluation of the front end of the OCS (i.e. user evaluation of the ‘solution finders’ stage)?

3. What provision will be in place to help those unable to use digital provision; how will its success be measured; and will users have the choice to use non-digital provision?

Measuring demographic uptake under question 1 would help, as might deep dives on specific problems where digital exclusion is likely to be a particular risk.

On implementation…

How will individual users, or their representatives such as the major advice providers, be involved in the testing of provision; how will the project be divided up so that feedback can effectively be used; and does the planned timetable allow for that?

The Canadians say they do this all the time, with users first and representatives after users (representatives shout louder but have less to say is the gist of why they do it this way, I think). The OCS could, perhaps should, commit to A/B testing, user testing, and to collating and analysing data for public scrutiny. There is a risk of going overboard with requests for too much detail and transparency (even public institutions need safe spaces in which to fail); perhaps an annual report on user testing (volumes, approaches, lessons), with the underlying data made available to researchers and other scrutineers.

Are there satisfactory arrangements for the integration of representatives into the process?

I am less worried about this. I don’t see why it will be a problem, though perhaps I don’t know enough about it. And witness Canada’s approach again: users first, experts second. Expertise is needed, but it should be neither supreme nor dominant.

Do the proposed digital procedures comply with the principles of procedural fairness and fair trial?

One could relatively easily measure procedural justice with exit surveys for users (if they engage), and I think that would be a good starting point. Work on those who engage and then disengage will also be very important, since disengagement may indicate flaws in the processes and approach of the OCS.

As important as procedural justice, in spite of what the likes of the estimable Tom Tyler say, is the quality of outcomes. Is there some process of review of outcomes at the negotiation, mediation and adjudication stages? It ought to be possible to make anonymised data, or data in safe havens, available (say) to researchers or overseeing judges to allow outcomes to be reviewed periodically.

What is the target cost for user fees; what are the target numbers for each stage of the digital procedure; and by how much is it intended that the new procedures increase usage?

I’m not sure I see the need for targets here, especially early on. In the early years they will be looking at growth in numbers generally, and perhaps at whether there is a shift towards earlier rather than later resolution. Treating the latter as a target is dangerous in my view: we should not assume the OCS will work like the inverted triangle that the Civil Justice Council, Richard Susskind and others hope for. If it does, great, but forcing dispute resolution into a particular shape risks failure: advice, resolution and adjudication form a complex system which requires a degree of modesty and circumspection in expectation setting. Once they understand how the system works, then perhaps targets might be in order, but not yet.

On monitoring…

Are there satisfactory ways of monitoring the experience and satisfaction of users and the quality of procedures?

As satisfactorily as, or more so than, is currently done? Yes, I would have thought so. Exit surveys would be one route. Short surveys, even single-question ones, both during OCS processes and at the end, could build up data without straining the patience of users.

An interesting tension is the reluctance to collect data on demographics/protected characteristics (HMCTS are worried this would discourage engagement; they should test this, as such data is crucial to understanding the system). There is also the potential for a wealth of interesting work if the system collects data on users and not just cases; failing to do so would be a real lost opportunity in my view. If user data could be collected we would see who is in the system most, and could also perhaps link with other data (education, criminal justice and health being the big ones). Data protection obviously needs to be thought about, but in policy terms this could be massively important.

Are satisfactory provisions in place for the oversight, supervision and research of digital procedures?

No idea. Let me know!

How will the new arrangements be publicly reported upon?

I am guessing the best place will be your blog, Roger, but let’s see…

If you have thoughts on this issue then please send them in – especially if you are working in Her Majesty’s Courts and Tribunals Service, from which anonymous contributions would be welcome and might help illuminate its public silence on access to justice indicators. I will fold all the comments, public and private, into a revised proposal for access to justice performance indicators to cover the introduction of online courts. Roger