Artificial intelligence (AI) will open up the world of legal services to a new generation of consumers by helping them work out how lawyers can assist with their problems, according to Professor Richard Susskind.
The legal futurologist, who wrote his doctorate on AI in the 1980s, predicted a breakthrough would come through “systems that systematically ask their users questions – to help pin down and actually categorise and classify the problems or issues on which they want guidance”.
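Professor Susskind did not describe any implementation, but a minimal sketch of the kind of question-driven triage he envisages might look like the following. The categories and questions here are purely illustrative assumptions, not any real classification scheme or product.

```python
# Illustrative sketch only: a toy question-driven triage flow of the kind
# Susskind describes, which asks a user structured questions and maps the
# answers to a broad legal category. All categories and questions are
# hypothetical examples.

QUESTIONS = [
    ("Does your problem involve your employer or workplace?", "employment"),
    ("Does it involve renting, buying or selling property?", "housing and property"),
    ("Does it involve a partner, children or divorce?", "family"),
    ("Does it involve money someone owes you, or that you owe?", "debt and contract"),
]


def triage() -> str:
    """Ask yes/no questions until one matches, then return a category."""
    for question, category in QUESTIONS:
        answer = input(f"{question} (y/n): ").strip().lower()
        if answer.startswith("y"):
            return category
    return "general"  # fall back to a catch-all category for a human referral


if __name__ == "__main__":
    print(f"Suggested area of law: {triage()}")
```

A production system would obviously go far beyond keyword-free yes/no prompts, but the sketch shows the basic idea: structured questioning that narrows an unarticulated problem down to a recognisable area of law.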
The difficulty clients face in identifying the nature of their legal problems has long dogged the growth of the sector, stifling demand and leading to repeated findings of significant unmet legal need.
The academic offered his thoughts as ChatGPT came to dominate debate about the uses of AI in the immediate future.
He observed it was significant mainly for what it would become, as increasingly capable technological developments entered the scene. “We are still in the foothills”, he added, but AI systems would in due course “outperform humans”.
ChatGPT would not set the legal world alight in the next couple of years, but future generations of the AI chatbot would, perhaps before the end of the decade, he believed. He described the chatbot as “the most remarkable development” he had seen in AI.
But he said most claims being made about its short-term impact on lawyers and the courts had been “hugely” overstated.
Meanwhile, lawyers and others should ask themselves how generative AI would affect access to justice and clients generally.
He concluded with an analogy from medicine: the greatest impact of AI on that field might lie in prevention rather than cure, not in simply replacing surgeons with robots.
“In law, the most exciting possibilities lie not in swapping machines and lawyers but in using AI to deliver client outcomes in entirely new ways – for example, through online dispute resolution rather than physical courts and, more fundamentally, through dispute avoidance rather than dispute resolution.”
Last year, Professor Susskind joined calls for the creation of a National Institute for Legal Innovation to help harness the potential of AI.
Separately, solicitors have responded to the government’s recent white paper on AI regulation by arguing for the importance of human lawyers advising clients, especially vulnerable ones.
The Law Society suggested that, while AI might be good for routine tasks, “it currently falls short in understanding nuanced client needs… and providing strategic advice to clients; these roles remain the domain of human, legally qualified professionals”.
The white paper recommended a light-touch framework built on existing regulation, seeking to balance oversight with the encouragement of AI innovation.
However, recent comments by Prime Minister Rishi Sunak have suggested a possible future re-working of this approach, in view of the rapid growth of generative AI. At the same time, the European Union is pressing ahead with a far more restrictive regime based on the risk of harm, similar to existing data protection regulations.
The Law Society urged the government to take a “balanced approach to the development and application of AI in our sector, maintaining clear delineation of the human role and accountability, and understanding of high-risk, dangerous capabilities, significant harms, to enable the profession to capitalise on the benefits of these technologies”.
It also sought clarity on how a global AI regulation regime would apply while jurisdictions differed in their approaches, asking “how discrepancies across sectors and regulators will be mitigated and how our profession can extend services overseas in the face of differing AI legislation across jurisdictions”.
It underlined the importance of confidentiality to the legal profession and urged a clear assignment of liability in relation to harmful AI lawtech products.
The society called for “a blend of adaptable regulation and firm [risk-based] legislation” to provide a safety net while enabling innovation. It also said larger companies should have an ‘AI officer’ role to manage their AI systems.
It argued that lawyers were well placed to be closely involved with the regulation of AI across the board.