Call for legal AI to have “ethical black box” to explain decisions


AI: sociologists should be involved

Artificial intelligence (AI) should be developed in conjunction with a wide range of non-technical specialists, while an ‘ethical black box’ showing how a system made particular decisions may also be needed, according to an academic.

He argued that if legal use of AI reflected the worldview of only scientists and engineers, and left out “social and cultural” perspectives, it could damage public trust in the law.

Siddarth Peter de Souza, a PhD student at the law faculty of Humboldt University in Berlin, writing in the Journal of the Oxford Centre for Socio-legal Studies, warned that, without transparency about the assumptions that went into an AI system's construction, there was a danger that human prejudices could become buried in the technology in ways that threatened justice.

He pointed out that AI was now being used in the fields of legal research, document review, e-discovery, and predictive analysis.

Each of the platforms was “designed to improve accuracy in legal research, reduce uncertainty and risks in terms of strategic decisions and save time and costs by enabling lawyers to spend more time on strategic tasks”.

He referred, for example, to a 2016 exposé of an algorithm used by US judges to assess a criminal defendant's risk of reoffending, which found that black defendants were rated as higher risk than they in fact were, while white defendants were rated as lower risk than was the case.

A key problem was the sheer complexity of the AI products in use – such that even their creators found them difficult to understand. This made it vital, he argued, that a method was found for the systems to explain transparently how they arrived at a particular conclusion.
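Purely by way of illustration – the article does not prescribe any particular technique – the sketch below shows one simple form such transparency can take: an additive score that reports each factor's contribution alongside the decision it produces. The feature names, weights and threshold are hypothetical.

```python
# A minimal sketch of a decision that "explains itself": every factor's
# contribution to the final score is returned with the result.
# All feature names, weights and the cut-off are hypothetical.

WEIGHTS = {
    "prior_convictions": 0.4,
    "age_under_25": 0.2,
    "missed_court_dates": 0.3,
}
THRESHOLD = 0.5  # hypothetical cut-off for flagging "higher risk"

def score_with_explanation(features: dict) -> dict:
    """Return a decision together with the per-factor contributions behind it."""
    contributions = {
        name: WEIGHTS[name] * float(features.get(name, 0))
        for name in WEIGHTS
    }
    total = sum(contributions.values())
    return {
        "score": round(total, 3),
        "decision": "higher risk" if total >= THRESHOLD else "lower risk",
        "contributions": contributions,  # makes the reasoning inspectable
    }

if __name__ == "__main__":
    print(score_with_explanation({"prior_convictions": 1, "age_under_25": 1}))
```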

One solution was “introducing sociological insights”. He added: “An argument can be made that by diversifying the pool of developers to include other disciplines, such as sociologists, designers, historians, and psychologists, a multiplicity of views will be brought to the table…

“Introducing a plurality of views would ensure a more balanced outlook on the use, development and management of data and methods that are being used to build the AI-driven legal products.”

Another possibility was to build an “ethical black box” into AI systems to “establish a process for discovering how and why a robot acted in a particular way, similar to the way in which a flight data recorder tracks and transmits internal data…

“Robots will be making decisions that often require a moral compass, and introducing such a framework would allow for accountability and transparency in their functioning, in addition to public trust in their processes.”
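Again as an illustration of the concept rather than of any specific product, the sketch below records each automated decision – its inputs, model version and output – to an append-only audit file, in the spirit of the flight-recorder analogy quoted above. The file name, record fields and record_decision helper are assumptions made for this example.

```python
# A minimal sketch of an "ethical black box": like a flight data recorder,
# every decision is appended to an audit log so that how and why the system
# acted can be reconstructed later. All names here are hypothetical.

import json
import hashlib
from datetime import datetime, timezone

LOG_PATH = "decision_log.jsonl"  # hypothetical append-only audit log

def record_decision(model_version: str, inputs: dict, output: dict) -> str:
    """Append one decision record with a content hash and return its ID."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    payload = json.dumps(record, sort_keys=True)
    record["record_id"] = hashlib.sha256(payload.encode()).hexdigest()[:16]
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record["record_id"]

if __name__ == "__main__":
    rid = record_decision(
        model_version="risk-model-0.1",
        inputs={"prior_convictions": 1, "age_under_25": 1},
        output={"score": 0.6, "decision": "higher risk"},
    )
    print("logged decision", rid)
```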

Mr de Souza concluded: “The framework and algorithms that go into designing the processes and technologies of AI products [must] adopt elements of social, ethical and moral reasoning, because the implications of the decisions of many of these products are entering into spheres that consist of assessment, appraisal and judgement, with profound implications for humans…

“Addressing the social will allow for a more holistic consideration of the increasingly critical functions performed by technologies in the legal domain…

“Unpacking the ‘black box’ of these technologies can make them more trustworthy, understandable, and accountable.”
