Ethical risks of using algorithms in justice system under spotlight as Law Society launches commission


Delacroix: Predicting outcome of court cases could be double-edged weapon

Ethical, moral and legal risks from the growing use of algorithms are under the spotlight as the Law Society launches a public policy commission today on the impact of new technology on the justice system.

Professor Sylvie Delacroix, one of three commissioners, told Legal Futures she was particularly concerned by the use of algorithms in the sensitive areas of divorce and employment law.

“People have talked about creating an app for divorce similar to the one used for parking tickets. It may be tempting to do this for cost-cutting purposes, but being fined for parking and losing your child are altogether different matters.

“The difficulty is in drawing a line as to whether any of the people involved have vulnerabilities which are unlikely to be picked up by an automated system.”

Professor Delacroix, who teaches law and ethics at the University of Birmingham, said a former student of hers had created an app in Canada to help people who had lost their jobs claim compensation.

“Someone who has lost their job may be in a very vulnerable position. I would never want them to be totally reliant on an app or a computer system. The app could be a triage tool, but I would want them to see a human first.

“An app could be fantastic in terms of enabling large numbers of people to exercise their rights to compensation, but it is crucial that we don’t let cost cutting compromise our commitment to law and equality.”

Professor Delacroix is joined on the commission by Law Society vice-president Christina Blacklaws and Sofia Olhede, professor of statistics at University College London.

Among other things, the commission will study the impact of algorithms and artificial intelligence (AI) on the police and prison service.

Examples cited by Chancery Lane included an algorithm used by Durham Constabulary to assess the risk of reoffending, the use of a crime prevention tool by Kent Police to map ‘hotspots’ of activity, and the use of facial recognition technology by the Metropolitan and South Wales police.

Ms Blacklaws said the design, sale and use of algorithms to deliver justice or maintain security “raises questions about unconscious bias, ethics and rights”.  

Professor Delacroix said the ability of AI to predict the outcomes of court cases could be a “double-edged weapon” if it contributed to “a kind of conservatism” by discouraging people from taking cases to court which algorithms said were unlikely to succeed.

“This could prevent the slow, organic evolution of legal systems,” she said.

However, just last week, the Lord Chief Justice, Lord Burnett, called the ability of computers to predict case outcomes “one of the most exciting developments of the age”.

Professor Delacroix said a distinction needed to be made between cases where “wholesale automation is justifiable if it is done properly” and where AI “should be used as an augmentation tool to help humans do the job better”.

She said that a first step towards regulation of algorithms would be ensuring there was a “meaningful explanation” as to how they worked.

“Until recently, the focus seemed to be on providing the source code, but these technical explanations are often not so useful for the public.

“More useful would be a human explanation about what factors are used to train the system. Since these are human decisions and choices, should we not ask the people who design the systems to document how they are designed?”

Looking further into the future, Professor Delacroix said she was fascinated by the challenges posed by systems that evolved autonomously.

“Even if we make great efforts to align these systems with our purposes, the problem is that, being autonomous, they are bound to evolve along divergent trajectories.

“Humans are creatures of habit and change their views on what is morally acceptable. Robots are unlikely to develop habits, but they are likely to change their moral views in a way completely different from ours.

“Will these systems be intelligible to us in the choices they make?”

A paper by Professor Delacroix on algorithms and autonomous systems can be found here.



