Peers call for national body to regulate use of AI in justice system


AI: Call for central register

Peers have called for the creation of a new national body to regulate the use of artificial intelligence (AI) in the justice system and elsewhere in the public sector.

The House of Lords justice and home affairs committee also recommended establishing a mandatory register of algorithms used by the public sector, with the power to issue penalties to those who refused to comply.

In the report Technology rules? The advent of new technologies in the justice system, the committee said it had uncovered “a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with”.

The committee said public bodies and all 43 police forces were free to commission “whatever tools they like” or buy them from companies eager to get into the “burgeoning” market.

“And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality – despite the fact that many of these systems will be harvesting, and relying on, data from the general public.”

The report argued that – without sufficient safeguards, supervision, and caution – advanced technologies used in the justice system in England and Wales could undermine a range of human rights, risk the fairness of trials and damage the rule of law.

While technologies like facial recognition had benefits – such as preventing crime, increasing efficiency “and generating new insights that feed into the criminal justice system” – the lack of controls and training was of concern, as were the dangers of algorithms embedding human bias already present in the underlying data.

“Meanwhile, users can be deferential (‘the computer must be right’) rather than critical. The committee is clear that ultimately decisions should always be made by humans.”

Peers said the lack of a central register of AI technologies made it “virtually impossible” to find out where and how they were being used.

They said the algorithmic transparency standard being piloted by the government for public bodies should be extended in terms of the data collected and made mandatory, paving the way for it to become a register of algorithms in the public sector.

The register should be able to issue penalties to those who failed to comply and should be “user-friendly”, allowing users to find out how AI solutions were being deployed and to see details of certification by the new national body.

This body should be independent, established on a statutory basis and given its own budget. It would set “minimum scientific standards”, which would be “transposed into regulations by secondary legislation”.

Peers said the new national body would “systematically certify technological solutions” following evaluation.

“No technological solution should be deployed until the central body has confirmed it meets the minimum standards. After a transition period, this requirement should retrospectively be applied to technological solutions already in use.”

Before advanced technologies were implemented they should also be subject to comprehensive impact assessments, including “considerations of bias”, weaknesses of the technology and “discursive consideration of the wider societal and equality impacts”.

Peers said their new regime should be introduced through primary legislation.

The report also identified a “significant and worrying body of evidence” that users of advanced technologies were “failing to engage, in a meaningful way” with the output of automated processes.

To guard against this in the justice system, training should be offered to lawyers, judges and others as part of their continuing professional development.

Baroness Hamwee, chair of the committee, said: “Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.

“Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.”



