MR: Regulators and courts need to control use of ChatGPT in litigation


Legal regulators and the courts may need to control “whether and in what circumstances and for what purposes” lawyers can use artificial intelligence (AI) systems like ChatGPT in litigation, the Master of the Rolls has said.

Sir Geoffrey Vos said there would need to be mechanisms to deal with the use of generative AI within the legal system.

“We may even hopefully turn it to the advantage of access to justice and effective and economical legal advice and dispute resolution,” he said.

Addressing the Law Society of Scotland’s Law and Technology conference last week, Sir Geoffrey highlighted the recent case of New York lawyer Steven Schwartz, who used ChatGPT to prepare his submissions in a personal injury case.

Six of the cases cited were, in the words of the judge, “bogus decisions with bogus quotes and bogus citations”. This was despite Mr Schwartz asking the system to confirm their accuracy.

“Mr Schwartz was not uncovered because of the language of his brief, but because the judge in that case took the trouble to look up the cases cited. No doubt that does not always happen,” said the MR.

“The risks of litigants in person using ChatGPT to create plausible submissions must be even more palpable. And indeed such an event was reported as having happened in Manchester only a few days ago.”

The case showed lawyers could not use generative AI to cut corners. “I suspect that non-specialised AI tools will not help professional lawyers as much as they may think, though I have no doubt that specialised legal AIs will be a different story.”

He said Spellbook was already claiming to have adapted “GPT-4 to review and suggest language for your contracts and legal documents”.

The judge quoted an article by City litigation firm Enyo Law that asked ChatGPT to identify its own most valuable uses in dispute resolution – it said they were to assist lawyers with drafting, document review, predicting case outcomes to inform strategy, and settlement negotiations.

“Clients are unlikely to pay for things they can get for free,” said Sir Geoffrey, echoing comments he made in April. “Mr Schwartz would have done well to read Enyo Law’s article, which emphasises that the large language model is there to ‘assist’ the lawyers and needs to be carefully checked.

“Nonetheless, if briefs can be written by ChatGPT and Spellbook, and checked by lawyers, clients will presumably apply pressure for that to happen if it is cheaper and saves some of an expensive fee-earner’s time.”

In litigation at least, Sir Geoffrey went on, “the limiting factor may be the court or tribunal adjudicating on the dispute”.

He said: “One can envisage a rule or a professional code of conduct regulating whether and in what circumstances and for what purposes lawyers can: (i) use large language models to assist in their preparation of court documents, and (ii) be properly held responsible for their use in such circumstances.

“Those will be things that the existing rules committees, regulators, and the new Online Procedure Rules Committee… will need to be considering as a matter of urgency.”

The MR added that the way ChatGPT answered Mr Schwartz’s questions asking it to confirm the cases it cited indicated two issues: that ChatGPT and AIs more generally “need to be programmed to understand the full import of a human question”, and that humans using them “need to be savvier in checking their facts”.

This meant both users asking more precise questions and programmers explaining to “the AIs they are creating what humans mean when they ask something as open-textured as ‘is this a real case’”.

He said: “This requires careful and detailed programming beyond what might be required in other fields of activity…

“If GPT-4 (and its subsequent iterations) is going to realise its full potential for lawyers… it is going to have to be trained to understand the principles upon which lawyers, courts and judges operate.

“As Mr Schwartz found to his cost, the present version of ChatGPT does not have a sufficiently reliable moral compass. In the meantime, court rules may have to fill the gap.”




