Generative AI could be a “potentially useful secondary tool” for judges to use in the course of their work, according to new guidance from the senior judiciary.
However, all judicial office holders "must be alive to the potential risks" it poses.
The guidance was produced by a cross-jurisdictional judicial group to assist judges, their clerks, and other support staff on the use of AI.
It was issued with the support of Baroness Carr, the Lady Chief Justice; Sir Geoffrey Vos, Master of the Rolls; Sir Keith Lindblom, Senior President of Tribunals; and Lord Justice Birss, deputy head of civil justice.
They said the guidance was the first step in a "proposed suite of future work to support the judiciary in their interactions with AI", with a frequently asked questions document to follow.
The guidance cautioned judges to “ensure you have a basic understanding of their capabilities and potential limitations” before using AI tools, such as appreciating that public AI chatbots did not provide answers from authoritative databases.
“As with any other information available on the internet in general, AI tools may be useful to find material you would recognise as correct but have not got to hand, but are a poor way of conducting research to find new information you cannot verify.
“They may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.”
But “provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool”, judges were told.
“If clerks, judicial assistants, or other staff are using AI tools in the course of their work for you, you should discuss it with them to ensure they are using such tools appropriately and taking steps to mitigate any risks.”
It listed potential uses of AI tools as summarising large bodies of text, writing presentations – e.g. to provide suggestions for topics to cover – and administrative tasks like composing emails and memoranda.
The guidance also warned that AI tools may make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist – so-called hallucination.
Just last week, we reported on how the First-tier Tribunal had decided that nine cases cited by a litigant in person in a tax case had been produced by generative AI. The guidance said it was "appropriate" for judges to ask unrepresented people whether they have used AI and what checks for accuracy they have undertaken.
Judges also needed to be aware of “potential challenges posed by deepfake technology”.
Provided AI was used responsibly, there was "no reason" why lawyers should have to refer to its use, although this depended on the context.
“Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.”
The guidance also stressed the importance of confidentiality and privacy – especially when using a public AI chatbot – as well as the potential for bias and security risks.