Vos: AI will redefine contours of professional negligence


Vos: Could an AI system become a member of a professional body? 

Professional negligence lawyers need to grapple with the question of whether it is negligent to use – or not to use – artificial intelligence (AI), the Master of the Rolls has said.

Sir Geoffrey Vos said that, under the law of negligence, professionals and others providing specialist services were expected to adopt widely recognised practices and procedures.

“The time will surely come in every professional field, where those widely recognised practices and procedures will include the use of AI.”

Examples could include the doctor who refuses to use an AI tool to diagnose cancer or an auditor who fails to use an AI tool to check for fraud in a company’s books.

“It seems to me that this prospect puts professional negligence lawyers, and perhaps tort lawyers in general, in a peculiarly interesting position.

“There will, of course, be many claims arising from the alleged negligent use of AI. I guess that lawyers will be working on difficult questions of the attribution of responsibility for losses caused by errors made by AI tools.

“At the same time, they will undoubtedly be faced with claims by those who suffer loss when a human, rather than a machine, advises them as to an investment or a financial decision, a medical diagnosis or taking a medication, building a bridge or wiring a power station, or so many other possible things.”

Looking even further ahead, said Sir Geoffrey, the advancement of technology “may force us to inquire as to the essential nature of a profession” and whether AI could actually become a professional or a member of a professional association.

“The law provides the social foundation for all our societies. When AIs are quicker and cleverer than humans, we will need to re-evaluate the infrastructure that the law provides for the delivery of advice and professional services.

“It will not be easy. But I would revert once again to my mantra. We must be guided by human values, justice and the preservation of a rules-based environment.”

Giving the Professional Negligence Bar Association’s annual address in honour of former Lord Chief Justice Lord Taylor – entitled ‘Damned if you do, Damned if you don’t’ – the MR said lawyers needed to be educated about the risks that AI posed and also “trained to know how to use and how not to use AI, and how to protect clients, businesses and citizens from those who will inevitably try to use AI for malign purposes”.

There was “a genuine risk… that lawyers and judges will move too slowly to understand and respond to AI and its effects”, bearing in mind the speed at which the technology was developing – a pace which meant that problems such as hallucination, inaccuracy and bias would recede.

Economic reality also meant lawyers could not ignore AI due to clients being unwilling “to pay for what can be done without significant charge by a machine”.

There were limits to this. “Parents are likely to need human lawyers to advise about care proceedings, and criminal sentencing is likely to be a human activity, for many years to come.

“Indeed, lawyers will always, I think, be needed to explain the legal position to clients, even if the advice and decision-making is undertaken or assisted by machines.

“But subject to those caveats, I cannot see individuals and businesses accepting lawyers charging, for example, for armies of paralegals and assistant solicitors to check IPO documentation that a machine can check for nothing.

“I cannot see clients paying large sums for manual legal research to be undertaken when specialist AI-driven research tools exist, as some do already.”

A further problem on the mid-term horizon was that machines would likely in future have capabilities “that make it hard and expensive, if not actually impossible, for humans to check what they have done”.

Sir Geoffrey said: “This is where professionals need to begin to develop systems to make sure that humans can be assured that what machines have done is reliable and usable, as opposed to dangerous and unreliable. In the law, we will need to explore how the product of a machine can be effectively challenged.”

This would require “massive effort” from lawyers, regulators, rule-making bodies and government, a task that “has hardly started”.

“If an AI can produce legal advice that is, say, 98% reliable, that might compete favourably with the best of lawyers. But how can we know? By what parameters will we determine that a professional is using all due professional skill, care and diligence when they use an AI that is, say, 99% accurate, but not when using one that is, say, 95% accurate?

“And, of course, accuracy cannot anyway be gauged on a linear scale. This may become a whole new science in itself.

“I believe that, if judges and lawyers continue to be driven by their commitment to the delivery of justice and the rule of law, they should not go far wrong. They will need to learn and to embrace change, but ultimately, we may still hope that the changes will be beneficial.”

Sir Geoffrey concluded: “Even if the professionals will be damned if they do use AI and damned if they don’t use AI, professional negligence lawyers will be in great demand – whatever happens.”

