Measuring to know and measuring to manage: an ODR access to justice audit

The proposal for an access to justice audit of ODR is an important call and its motivating rationale is one with which I sympathise, writes Joe Tomlinson of Sheffield University and the Public Law Project. The UK National Audit Office’s (NAO) report on the HMCTS transformation project was an informative glimpse into a reform process which has been much less transparent than it ought to be. But, as Roger Smith points out, the NAO report was narrow in its terms of reference and access to justice considerations are subject to no equivalent systematic assessment.

Others who have responded to Roger’s post have engaged in the detail of his proposals. I am afraid that my response—and I ask for forgiveness for this—is rather more ‘academic’. I think Roger’s call speaks to a wider question that is lingering around the ongoing civil and administrative justice reforms in the UK: can we measure access to justice and, if so, to what end should we measure? Roger’s proposal provoked me to reflect more on this so I took the opportunity to reply to his post to set out how I see this, and where Roger’s suggestion fits.

My main point here is that we can and should measure access to justice (especially in relation to the ongoing reforms) but how we go about doing so should be informed by why we are measuring. In particular, we need to distinguish between two ends: measuring to know and measuring to manage. On the basis of how I see this distinction, I think Roger’s framework is too ambiguous for the former purpose but spot on for the latter. In this way, he lives up to his ambition: ‘[t]hese are intended as management tools not an academic research agenda.’ However, if Roger’s suggestions are to be used as a management tool, there remains an important issue: identifying the body or group that will conduct such an audit.

The measurability of access to justice

Access to justice is an ambiguous concept and its meaning and requirements are contested. It is a term which essentially gives expression to a bundle of claims or requirements that the user of the concept assigns to it. We all may have a sense of what others mean when they say ‘access to justice’ but if we were to examine each other’s understandings then we would eventually find disagreement.

The ambiguity in the concept can, and often does, give rise to the impression that access to justice is not as ‘measurable’ as other parts of the state, such as those related to spending. While we could devote an eternity to arguing about the meaning of access to justice (without finding a ‘right’ answer), that does not mean we should not try to measure it in some way. What it does mean, however, is that we need to pin down more specific objectives, existing under the rubric of access to justice, that we should measure. As I see it, how specific these objectives ought to be depends on why we are measuring access to justice.

Measuring to know

If we are to measure access to justice in order to know how the system works and to generate clear evidence, we need to think in terms of precise outcomes and variables to be measured. If we do not do this, there is a risk that we simply magnify and multiply ambiguity, rather than measuring the reality precisely.

As I have argued in a recent Public Law Project and UKAJI report (written with Robert Thomas), I think it is essential that we measure to know in respect of access to justice in the ongoing HMCTS reform programme. More data on access to justice will enable better understanding, better learning, better design, and continuous improvement. More data can also help us analyse and validate the implementation of reforms by providing robust and timely insights into how the reforms are operating and what, if any, changes are needed. For this kind of measuring, we need to refine precise objectives to be tested.

Something more precise than Roger’s proposed objectives is required for this purpose. Take, for instance, one of Roger’s important questions: ‘do the proposed digital procedures comply with the principles of procedural fairness and fair trial?’ Most people (I hope) would agree this is an important issue, but the concept of procedural fairness is another of those ambiguous, contested concepts—like access to justice—that are used flexibly. There are a variety of ways of thinking about procedural fairness, and the question of whether a system or process is procedurally fair does not admit just one answer or just one way of reasoning. As an administrative justice researcher, I regularly see (at least) four broad and often overlapping modes of thinking about procedural fairness in contemporary debate. We can (roughly) break the thinking up as follows:

  • Thinking morally about procedural fairness: asking or arguing about what provides the moral basis for procedural fairness;
  • Thinking legally about procedural fairness: asking or arguing about what is the posited law on procedural fairness and how it can be applied to a particular situation or system;
  • Thinking sociologically about procedural fairness: asking or arguing about whether a process is being experienced in a procedurally fair way; and
  • Thinking constitutionally about procedural fairness: asking or arguing about what constitutional theory tells us about procedural fairness.

All of these ways of thinking take us to slightly different understandings of procedural fairness. If any access to justice measurement is to be used in the ODR context, it will be important to avoid just layering ambiguity on ambiguity by determining, precisely, the outcomes to be tested.

Measuring to manage

Measuring access to justice to learn how systems are working is one thing; using an access to justice audit as a ‘management tool’ is a very different enterprise (though they can, of course, be complementary endeavours). As mentioned above, this—and not measuring simply to know—is the purpose of Roger’s framework: ‘[w]e are aiming for something easily applicable. These are intended as management tools not an academic research agenda.’

Performance management by ‘audit’ has its well-known benefits—the NAO report is a good example of how this approach can keep pressure on public bodies effectively. Yet this technique also has its limits. In his recent book on The Tyranny of Metrics, for example, Jerry Z. Muller outlines how audit-type processes often slip ‘from measuring performance to fixating on measuring itself.’ This, Muller argues, ‘distorts and distracts’ and leads to ‘gaming’ and ‘teaching to the test’. One key point to be found in Muller’s analysis is that precision is not always necessary or helpful when measuring to manage—indeed, precision can be counterproductive in this context. As a tool designed to measure to manage (or, to put that another way, to create some form of accountability in the reform process), Roger’s list of questions [now amended as to content but not as to purpose] is a very good framework indeed.

The difficult task is identifying which bodies or groups may be well placed to take up this role. One tentative suggestion is the Civil Justice Council and the Administrative Justice Council. Both bodies are small compared to the NAO, but perhaps they can have an important role in making sure that such questions are—at the very least—being asked.

Joe Tomlinson, lecturer University of Sheffield and research director Public Law Project
