M-RCBG Associate Working Paper No. 257
The Future for AI, Machine Learning and Assistive Technology in the Justice System: A Principled and Practical Approach
Rt Hon Sir Robert Buckland KBE KC
Introduction
The aim of this paper is to lay the groundwork for the widespread deployment of AI in justice systems, with the goals of improving access to justice and shortening the time it takes to resolve cases. I look to my own jurisdiction, England and Wales, as a place where I advocate some new approaches and governance structures, based upon the general principles laid down in this paper.
The paper will, first, identify the current challenges facing the judicial system and the potential of AI to solve them; second, lay down a set of principles to guide the use of AI to resolve cases; and third, address the potential unintended consequences of widespread AI use and the measures that can be taken to alleviate them.
If I needed reminding of the point that AI technology is all around us already and is part of our system of policing and justice, I received one just last month as I walked, with justified trepidation, towards the Principality Stadium at Cardiff to see my beloved Wales rugby team get thrashed by England. As I did, I passed signs telling me that the South Wales Police were using facial recognition technology in the City Centre that day as part of their crime prevention and investigation measures. The use of this technology has been controversial, to say the least. It was South Wales Police whose initial deployment of facial recognition landed them in the England and Wales Court of Appeal, where it was held that its use was unlawful, on the basis that its potential impact on minority communities had not been evaluated prior to its use, and that this was a breach of the police authority's statutory duty under the Equality Act. In other words, the question of bias had not been properly addressed.
Since then, with the necessary work and assessments having been done, this technology is reappearing. The London Borough of Croydon will soon be installing permanent facial recognition infrastructure, in order, it says, to help detect crime. The arguments about bias go on. There is no specific modern legislative structure underpinning the use of this technology, and it may well be that this is an indicator of how things will develop elsewhere.
In the ensuing months, I, like many others, have struggled to keep up with the pace of change. I have benefitted from conversations with experts in AI such as Professor Richard Susskind, who rightly take the view that the rise of AI isn't just another "bolt-on" to traditional practices, but a complete transformation of our lives and of our relationships with existing institutions and systems. It is in that spirit of transformation, then, that I approach my second and final paper.
A growing and real question for us now is: will people still want to use conventional court litigation systems if they can access private dispute resolution processes that are cheap and fast? Does increasing familiarity with AI mean that more and more people will readily consent to automated decision-making in justice? I think the answer is a resounding yes, but that the consequences for the existing system and our rule of law do not have to be a zero-sum game.
Instead, state systems of justice can, at their heart, enshrine principles of fairness, human rights and independence of decision-making that will be the "kitemark" or gold standard of a justice system that has integrity. This will continue to be of particular importance when it comes to crime and punishment, family arrangements for children, and reputation-affecting disputes.
Let us not forget the practical realities facing many of our court systems. Looking at England and Wales, the current backlog in the Crown Court now stands at over 70,000 cases. This is having a huge and negative impact on complainants, witnesses, and defendants alike, whilst the wider public interest in justice being seen to be done, and done relatively swiftly, is not being served. In March 2024, an independent review of the laws and rules on disclosure of material in criminal cases, headed by Jonathan Fisher KC, recommended an overhaul of the law to reflect the necessity of using AI and assistive technologies to sift and catalogue material relevant to the issues in the case but not relied upon by the Prosecution.
The purpose of this paper is to try to set out some principles for the use of AI technology in our systems, and to propose a form of governance to oversee its safe and ethical use. We have now gone far past the stage where the pace and scale of change should be governed by current limitations in LLMs (large language models), such as hallucinations. This paper is written on the assumption that AGI (artificial general intelligence) will become a reality within a decade. The real question for us is whether to take small incremental steps now (e.g., automated decision trees) or to make a great leap forward. The latter option may well be difficult to achieve because of the complexity of state procurement processes, but the former option risks placing justice systems in a constant state of "catch-up" with the rest of the world.