Standards committee backs AI “regulatory assurance body”


Evans: Explainable AI is a realistic and attainable goal

A body that identifies gaps in the regulatory landscape on the use of artificial intelligence (AI) and advises individual regulators is needed as the technology develops, the government has been told.

The recommendation of the Committee on Standards in Public Life comes as legal regulators begin to address questions around the use of AI.

The committee said contributors to its review were concerned that public bodies were introducing AI “without a clear understanding” of legal requirements.

It said: “Concerns were most pressing in law enforcement and the judiciary, where new surveillance capabilities, such as automated facial recognition, will impact on citizens’ rights and freedoms.

“Legal experts told the committee that public bodies were often relying on a tenuous and piecemeal legal basis, often constituted from multiple sources, to legitimate the use of new technology.

“Contributors criticised the fact that intrusive and controversial technology, which has the potential to reshape society in radical ways, is introduced in this way.”

The committee, an advisory non-departmental public body of the government chaired by former MI5 boss Lord Evans, went on: “Public bodies should not implement AI without understanding the legal framework governing its use.

“Introducing algorithmic systems into the public sector without a clear legal basis not only undermines public standards, but also the rule of law.

“Judicial review may create legal clarity but a series of high-profile court cases investigating illegality by public bodies will undermine trust in what can be a potentially beneficial technology.”

Among its recommendations, the committee advised the government to give the Centre for Data Ethics and Innovation (CDEI), which already advises on the regulation of AI, a regulatory assurance role and put it on an independent statutory footing.

Its purpose would be to identify gaps in the regulatory landscape and advise individual regulators and the government on issues relating to AI.

The committee said most contributors to the review argued that a single AI regulator was impractical.

“A new AI regulator would inevitably overlap with existing regulatory bodies, who will already have to regulate AI within their sectors and remits. As such, the committee believes that the UK does not need a new regulator.”

However, given the complexity of AI and lack of expertise, it was unlikely that regulators would “be able to meet the challenges posed by AI” without guidance from a central body.

The committee said it supported the government’s intention to put the CDEI on a statutory footing to safeguard its independence.

“However, the specific roles and functions of the CDEI remain unclear. The government must clarify its purpose and assure that appropriate safeguards are in place so that it can fulfil its intended role as a regulatory assurance body.”

Referring to a recommendation in the Law Society’s report on algorithms in the criminal justice system that AI systems should be clearly explained in advance, the committee said this should apply across the public sector.

“Public bodies should publish a statement on how their use of AI complies with the relevant laws and regulations before they are deployed in public service delivery.”

Lord Evans said: “Explanations for decisions made by machine learning are important for public accountability.

“Explainable AI is a realistic and attainable goal for the public sector – so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.”




