SRA: Lawyers “must not trust AI to judge its own accuracy”

The Solicitors Regulation Authority (SRA) has warned solicitors not to trust artificial intelligence (AI) to “judge its own accuracy” and to remember that existing AI “does not have a concept of truth”.

The regulator cited one law firm’s comparison of AI systems to “bright teenagers, eager to help, who do not quite understand that their knowledge has limits”.

In a Risk Outlook report on AI, the SRA said law firms must “supervise AI systems, and staff use of them”, to make sure they were working as expected and providing accurate results. Supervision should be able to “cope with the increased speed” of AI.

“Use systems to speed up and automate routine tasks, supporting rather than replacing human judgement.

“Remember that you cannot delegate accountability to an IT team or external provider: you must remain responsible for your firm’s activities.

“If you ask a system to summarise online information, you can ask it to give references. This should make it easier to check it is not hallucinating information.”

The SRA said ‘hallucination’, where a system produced “highly plausible but incorrect results”, could lead to AI drafting legal arguments that cite non-existent cases.

“This might have happened because the system had learned that legal arguments include case and statute references in specific formats, but not that those references needed to be genuine.”

These errors could lead to consumers paying for legal products that are “inaccurate or do not achieve the results that they intended”, or law firms “inadvertently misleading the courts”.

On confidentiality and privacy, the SRA said particular threats included a staff member “using online AI, such as ChatGPT, to answer a question on a client’s case”, confidential data being revealed when it was transferred to an AI provider for training, and output from an AI system “replicating confidential details from one case in its response to another”.

AI could also be used by criminals, the SRA said, for example to create highly realistic ‘deepfake’ images and even videos which, “combined with AI-assisted imitation of voices from short samples”, could make phishing scams harder to recognise.

“In the same way, AI might be used to create false evidence. There has already been at least one case of a respondent casting doubt on the other side’s case by suggesting that evidence against them was falsified in this way.”

On the positive side, AI had “very high potential to help firms, consumers and the wider justice system”, the regulator said.

As AI developed and as consumers became increasingly comfortable with it, the risk to firms “might not come from adopting AI, but from failing to do so”.

AI could increase speed, save cost and help to make legal reasoning clearer by showing how an AI algorithm reached its decisions.

“Firms that use AI that is well audited and built around transparency might be able to help support public understanding of legal services. This will also reassure consumers.”

Meanwhile, AI chatbots could “help firms provide services to clients at times when staff would not otherwise be available”.

On cost, the SRA said: “There are many sophisticated and expensive AI products on the market which are often aimed at, and in many cases designed specifically for, large corporate businesses.

“But increasingly we hear from small firms who are using a combination of ‘off the shelf’ and generic AI technologies to help their business and clients.”

The SRA said the speed, cost and productivity benefits provided by AI could help improve access to justice. “We intend to retain an open mind on the systems used, balancing protections for consumers with support for innovation.”

Paul Philip, chief executive of the SRA, commented: “So far, it has mainly been larger firms using AI. However, with such technology becoming increasingly accessible, all firms can take advantage of its potential.

“There are opportunities to work more efficiently and effectively. This could ultimately help the public access legal services in different and more affordable ways.

“Yet there are risks. Firms need to make sure they understand and mitigate against them – just as a solicitor should always appropriately supervise a more junior employee, they should be overseeing the use of AI.”



