Guest post from Jingchen Zhao, professor of law at Nottingham Law School
Artificial intelligence (AI) is taking the business world by storm with its capability to collect, filter and react to data rapidly and in many different ways.
Applying AI technology in law firms involves using computers, algorithms and big data to assist, support, collaborate with or even replicate lawyers in their behaviours and decisions, so that firms can function competently, successfully and with foresight in their business environment.
The interconnected world enhanced by technologies such as AI brings many positive changes to the ways in which law firms communicate with their customers, clients and business partners, offering the advantage of a more efficient and effective service without compromising quality.
Although AI has not yet been developed to a level where AI-empowered legal advice could fully replace human legal practitioners, its adoption has the potential to reduce transaction costs and improve the accessibility of legal advice through automated assistants, digital hubs and software that offer AI-powered legal services to vulnerable clients.
In collaboration with the Hungarian digital law firm SimpLEGAL, InvestCEE LegalTech Consultancy issued “AI in Legal Services – A Practical Guide” in December 2021, suggesting that AI offers new opportunities for digitalising legal services.
One of the most common ways of using AI in legal practice is to delegate certain tasks to it, especially where decisions need to be reached on the basis of large quantities of data and legal practitioners cannot provide a swift response.
This kind of delegation can ease the tension between plausible hypotheses and the formal analysis of professional judgements by lawyers, allow the systematic study of issues in order to help legal practitioners make better decisions, and mitigate human limitations in terms of understanding complex data and making well-informed choices between the options available.
In addition to assistance with processing large quantities of data, efficient algorithms have empowered AI to make decisions at a near-instantaneous speed.
AI technologies are able to categorise solutions based on different criteria and priorities, assess the merits of each solution, and recommend a set of selected options for legal practitioners, who are then able to evaluate these solutions more efficiently and in a focused and informed manner.
This evaluation process can be made even more effective as algorithms “can be configured to calculate and inform the confidence level” of the selected options and assess the merits and disadvantages of each one.
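For readers curious about what this might look like in practice, the short Python sketch below is a purely illustrative, simplified example; the criteria, weights and options are invented for this post rather than taken from any real product or from the guide cited above. Each candidate option is scored against weighted criteria, and a rough confidence figure is attached so a practitioner can see how clearly the leading option stands out.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the criteria, weights and options below are
# invented for this example and do not describe any particular legal AI product.

@dataclass
class Option:
    name: str
    scores: dict  # criterion name -> score between 0 and 1


# Weighted criteria a firm might care about (illustrative values).
WEIGHTS = {"legal_merit": 0.5, "cost": 0.2, "speed": 0.3}


def weighted_score(option: Option) -> float:
    """Combine the option's per-criterion scores using the firm's weights."""
    return sum(WEIGHTS[c] * option.scores.get(c, 0.0) for c in WEIGHTS)


def rank_options(options: list[Option], top_n: int = 2) -> list[tuple[str, float, float]]:
    """Return the top options together with a crude 'confidence' figure.

    Confidence here is simply each option's share of the total score,
    so a practitioner can see how far apart the recommendations are.
    """
    scored = [(o.name, weighted_score(o)) for o in options]
    total = sum(score for _, score in scored) or 1.0
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(name, score, score / total) for name, score in scored[:top_n]]


if __name__ == "__main__":
    candidates = [
        Option("Settle early", {"legal_merit": 0.6, "cost": 0.9, "speed": 0.9}),
        Option("Proceed to trial", {"legal_merit": 0.8, "cost": 0.3, "speed": 0.2}),
        Option("Refer to mediation", {"legal_merit": 0.7, "cost": 0.7, "speed": 0.6}),
    ]
    for name, score, confidence in rank_options(candidates):
        print(f"{name}: score={score:.2f}, confidence={confidence:.2f}")
```

A real system would of course rely on far richer models and data, but the principle is the same: the software narrows the field and quantifies its own uncertainty, while the lawyer makes the final judgement.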
In-house legal departments need more guidance on the basic terminology used in the legal AI domain. When applying AI in a firm, it is also important to understand how this might change the firm's risk profile, since AI can be a disruptive technology. Accountable AI practice needs to be reinforced by regulatory insight to enable its sustainable development.
However, as yet no consensus has been reached on the most appropriate regulatory framework to achieve these goals.
The European Commission is taking the lead in regulating AI globally, proposing a risk-based regulatory framework that involves determining the scale or scope of the risks related to a concrete situation and a recognised threat.
This framework is also likely to be useful in unpacking the potential role and challenges of AI in promoting more accountable law firms and legal professionals, given the benefits that accountable and sustainable AI could bring to law firms in protecting their clients, particularly vulnerable ones.
By facilitating the use of AI services, the commission’s regulatory framework should help law firms to identify and meet the needs of clients who may have difficulty using legal services, or who may be at risk of acting against their own best interests.
An appropriate regulatory framework to promote sustainable AI by monitoring and mitigating the associated risks in legal practice is a pre-condition for using AI more comprehensively in the legal domain.
Rather than a free-standing regulatory intervention, I believe the ideal approach will be to construct a regulatory agenda and control strategy that can be combined with other control strategies across different social, economic and cultural contexts and tasks.
The design of this framework should encourage the participation of stakeholders with different expertise such as computer scientists, representatives from industrial organisations, active shareholders, specialist committees and counsel, and consultants or partners with expert technological skill sets, as well as international agencies.