In recent years, and without many of us realising it, Artificial Intelligence has begun to permeate every aspect of our personal and professional lives. We live in a world of big data; more and more decisions in society are being taken by machines using algorithms built from that data, be it in healthcare, education, business, or consumerism.
Our Committee has limited its investigation to only one area: how these advanced technologies are used in our justice system.
Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline.
We began our work on the understanding that Artificial Intelligence (AI), used correctly, has the potential to improve people’s lives through greater efficiency, improved productivity, and solutions to often complex problems.
But while acknowledging the many benefits, we were taken aback by the proliferation of Artificial Intelligence tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time.
When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained?
Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening.
Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government, and legislation have been unable to match.
Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market.
And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and implementing, due to the seller’s insistence on commercial confidentiality, despite the fact that many of these systems will be harvesting, and relying on, data from the general public.
This is particularly concerning in light of evidence we heard of dubious selling practices, and of claims made by vendors about their products’ effectiveness that are often untested and unproven.
We learnt that there is no central register of AI technologies, making it virtually impossible to find out where and how they are being used, or for Parliament, the media, academia, and importantly, those subject to their use, to scrutinise and challenge them. Without transparency, there can be no scrutiny, and no accountability when things go wrong. We therefore call for the establishment of a mandatory register of algorithms used in relevant tools.
And we echo calls for the introduction of a duty of candour on the police to ensure full transparency over their use of AI given its potential impact on people’s lives, particularly those in marginalised communities.
Thanks to its ability to identify patterns within data, AI is increasingly being used in ‘predictive policing’ (forecasting crime before it happens).
AI therefore offers a huge opportunity to better prevent crime but there is also a risk it could exacerbate discrimination. The Committee heard repeated concerns about the dangers of human bias contained in the original data being reflected, and further embedded, in decisions made by algorithms.
As one witness told us: “We are not building criminal risk assessment tools to identify insider trading or who is going to commit the next kind of corporate fraud … We are looking at high-volume data that is mostly about poor people.”
While we found much enthusiasm about the potential of advanced technologies in applying the law, we did not detect a corresponding commitment to any thorough evaluation of their efficacy.
Rigorous trial methodology is fully embedded in medical science, but there are no minimum scientific or ethical standards that an AI tool must meet before it can be used in the criminal justice sphere.
Most public bodies lack the expertise and resources to carry out evaluations, and procurement guidelines do not address their needs. As a result, we risk deploying technologies which could be unreliable, disproportionate, or simply unsuitable for the task in hand.
A national body should be established to set strict scientific, validity, and quality standards and to certify new technological solutions against those standards.
In relation to the police, individual forces must have the freedom to engage the solutions that will address the problems particular to their area, but no tool should be introduced without receiving “kitemark” certification first.
Throughout this report, we assign the national body a range of other duties. Key among them must be the establishment of a proper governance structure with the ability to carry out regular inspections.
We were told of more than 30 public bodies, initiatives, and programmes which play a role in the governance of new technologies in the application of the law.
Inevitably, their respective roles are unclear, functions overlap, and joint working is patchy. Government departments do not co-ordinate. No clear strategic plan can emerge out of such confusion, nor any mechanisms to control the use of new technologies. Certainly, it cannot be ascertained where ultimate responsibility lies.
The system needs urgent streamlining and reforms to governance should be supported by a strong legal framework. As it stands, users are in effect making it up as they go along.
Yet without sufficient safeguards, supervision, and caution, advanced technologies may have a chilling effect on a range of human rights, undermine the fairness of trials, weaken the rule of law, further exacerbate existing inequalities, and fail to produce the promised effectiveness and efficiency gains.
We acknowledge the good intentions of users, but good intentions are not enough. Legislation should be introduced to establish clear principles applicable to the use of new technologies, as the basis for detailed supporting regulation which should specify how these principles must be applied in practice.
Local specialist ethics committees should also be established and empowered. The law enforcement community has particular powers to deprive people of their liberty and to interfere with their human rights. It therefore has a corresponding responsibility to maximise the potential benefits of technology, while minimising the risks.
We are clear that the human should always be the ultimate decision maker, as a safeguard for when the algorithm gets things wrong, or when more information is required to make an appropriate decision. It is all too easy for an algorithmic suggestion to simply be confirmed with the click of a button.
Individuals should be appropriately trained in the limitations of the tools they are using. They need to know how to question the tool and challenge its outcome, and have the correct institutional support around them to do that.
We believe that there should be mandatory training for officers and officials on the use of the tools themselves as well as general training on the legislative context, the possibility of bias and the need for cautious interpretation of the outputs.
As the use of new technologies is becoming routine, these proposed reforms will ensure that we maximise their potential while minimising the associated risks.
They would reverse the status quo in which a culture of deference towards new technologies means the benefits are being minimised, and the risks maximised.
And they would consolidate the UK’s position as a frontrunner in the global race for AI while respecting human rights and the rule of law.