Posted by Mark Hughes, an associate solicitor at Legal Futures Associate O’Connors
Over the last year or so, there has been considerable hype about our legal jobs potentially being taken away by artificial intelligence (AI). With the passage of time, though, we now seem to be on the path to accepting AI within our working lives.
As more of our familiar legal resources embrace a generative AI overhaul, and new players come to the market, there are some key issues your law firm needs to consider when adopting AI-based legal tech.
Licensing
Whenever you procure a new tech solution, it is important to understand the applicable licence provisions and to ensure that they allow you to use the solution in the way you intend.
When purchasing a composite software solution (i.e. a tech solution featuring multiple software components), the overarching licence from the main provider is not the be-all and end-all.
Often you will discover that multiple third-party licences apply to the various components within the solution, and you will need to check in detail what those additional licences permit you (or the primary service provider) to do.
There are currently AI solutions on the market whose licences require you to assign away the rights in the solution’s outputs. Beware of whose rights you might be giving away here!
Data protection
If there is anything that the GDPR has taught us, it’s that we need to be more vigilant over how, where, and when our data is stored. The same principle needs to be extended to the data being fed into an AI-based solution.
Where is that data being processed? What is the purpose? How long will it be stored? The GDPR implications in this area are extensive and are yet to be suitably tested within AI-based environments.
Whilst you need to understand where the input data goes, you also need to understand what happens to the output data.
Will this comply with the GDPR? Are you giving the rights away? Will the data remain within the solution’s server space ad infinitum? Where does the data go afterwards? Does the output data form part of the AI tool’s own data bank? If so, there is a real risk that your original data becomes irretrievable and/or is used many times over by other users of the technology.
Confidentiality and regulatory requirements
As a legal services provider, you have multiple obligations towards your client to balance with your regulatory duties.
Acting in your client’s best interests is enshrined in our code of conduct and lies at the heart of our work, but using their information for your intended purpose without the right permissions may not align with that duty.
For example, will your existing letter of engagement, privacy policy or cookie consent capture the right permissions to process your client’s information via an AI tool? Can you suitably track that information and ensure that the data is deleted when it is no longer used for its intended purpose?
Understanding an AI solution’s infrastructure is integral to knowing how safe your data is and whether you should store sensitive details in it. This has the data protection implications mentioned above, but it also extends to the commercial sensitivity of the inputs you feed into the tool and/or the documentation you make available to its data set.
Without appropriate measures in place, there is a risk that your client’s pricing structures (for example) could be made available to other users of the tool, jeopardising the confidentiality and/or conflict-of-interest safeguards implemented within your firm.
Security itself is a huge risk: where the AI tool is hosted, and where its inputs and outputs reside, will affect how securely the information is stored.
Would you be in breach of your own confidentiality obligations by taking client documents or data that fall within your contract’s definition of ‘confidential information’ and sharing them with a third-party tool? Does it compromise any of your Solicitors Regulation Authority duties? Can you be comfortable that you are suitably insured to use the AI tool in your own service offering?
The data sets
One of the greatest advantages we are told generative AI offers is its ability to draw on an incredibly large volume of data and to process that data within moments.
The greatest risk here is that you cannot control that source data, so prejudices or misinformation may be built into the original data set and then reproduced in the tool’s output.
Conversely, limiting an AI tool’s data set to a condensed, curated resource inhibits the AI’s ability to operate and/or to deliver the benefits of large-scale data processing in a fraction of the time. There is then also the risk of carrying your own mistakes or misinformation through into the tool’s immediate and future outputs, compounding them.
Getting the balance right on the original data set is therefore incredibly important, and understanding the data set that your inputs are working from is vital to reducing these risks.
This applies equally to the outputs, so you should also know what happens to those data outputs and whether they could form part of the AI tool’s own data set in the future.
Verification and justification
Watching a generative AI tool draft a solution before your eyes is still an impressive sight. Exactly how it has produced the answer, however, can remain a mystery.
If you are going to rely on that output as your own, any good lawyer will need to understand the rationale behind the proposed answer too, so being able to access the logic or breadcrumb trail of the AI tool’s decision-making process is essential.
If an AI tool has used various resources to create its responses or documentation, then having sight of those resources is important to verify the responses being given.
There is a growing number of examples of ‘hallucinations’ produced by generative AI tools, and to reduce the risk of your own output containing such fabrications, you should have access to the source materials and, ideally, data markers behind each aspect of the output.
Whilst the computational power of AI tools to summarise complex and lengthy documents is a boon for legal research, time needs to be allocated to verifying the response and the source materials.
Serious consideration needs to be given to whether a real time saving is being realised or whether the time spent verifying the responses erodes the tool’s efficiency.
So, how can you mitigate these risks?
The only real way to have assurance in your AI legal tech tool would be to develop one yourself, should you have the internal development resource available. Not only would this provide assurance, but it would likely enable you to achieve a higher EBITDA multiple on a sale as you become not just a law firm but also a legal tech business.
We have seen tactical acquisitions made within the legal market where the legal tech has been at the heart of the transaction.
In the absence of developing or acquiring your own AI legal tech, most of the risks highlighted above will ordinarily be governed by the main technology licence and its embedded third-party licences.
Getting a full understanding of the implications of these terms is essential when looking to use an AI-based tool – preferably before you click the ‘I agree’ button just to get the tool working.