Posted by Richard Burnham, co-author of How to Be an Ethical Solicitor, published by Legal Futures Associate Bath Publishing
Whether you call it lawtech, legaltech or artificial intelligence (if you really must), technology is rapidly working its way into the foundations of the delivery of legal services.
Our book, How to Be an Ethical Solicitor, considered the ethical challenges faced by solicitors in the 21st century. As lawtech sweeps across the profession at the same time as the Solicitors Regulation Authority increases its focus on self-regulation, a new set of ethical conundrums emerges for legal professionals to consider.
So, to what extent do practitioners need to concern themselves with the ethics of lawtech? It depends on what you mean by lawtech. If the majority of the legal press are to be believed, self-aware robot lawyers armed with ‘the blockchain’ are poised to imminently overthrow the profession and render solicitors and barristers entirely redundant.
For what it’s worth, you probably don’t need to panic (yet). Technology is certainly changing the legal landscape, but the profession’s Armageddon is not yet in sight.
For now, lawtech is simply a wide-ranging label for technology created with a view to reducing law firm overheads and/or increasing access to justice. You typically see it deployed in case management systems, document analysis algorithms, case outcome predictors, and chatbots designed to provide interim legal advice to consumers.
The ethical conundrums of lawtech are many, sprouting mostly from its complexity. One of the central tenets of justice has always been that the way in which it is arrived at must be transparent. This can be difficult with lawtech, as the algorithms employed can be complex and tough to understand, even for their creators. Solicitors using technology as an enhancement to their legal advice may find it harder to account to their clients, purely because they may not fully appreciate the mechanics of the program that has produced the advice they pass on to their client.
It is similar to the warnings given at law school about utilising precedent documents for clients when you do not fully appreciate the document’s legal workings.
Consider case outcome prediction software, which uses an algorithm to rifle quickly through judicial decisions and predict the success rate of a case by comparing the client’s factors (entered by a solicitor) with those of the precedent bank.
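By way of illustration only, a crude version of that comparison might look something like the sketch below: the client’s case is reduced to a handful of numeric factors and matched against the most similar past cases. Every name and number here is hypothetical, and commercial tools are far more elaborate, and far less inspectable, than this.

```python
# Toy "case outcome predictor": compare a client's case factors against a
# small precedent bank and report the share of the most similar past cases
# that succeeded. Purely illustrative; all data and factor names are made up.

from math import sqrt

# Hypothetical precedent bank: each past case reduced to numeric factors
# (say, claim value band, strength of documentary evidence, witness
# credibility) plus whether the claimant succeeded.
precedent_bank = [
    {"factors": [0.8, 0.6, 0.9], "succeeded": True},
    {"factors": [0.2, 0.3, 0.4], "succeeded": False},
    {"factors": [0.7, 0.8, 0.5], "succeeded": True},
    {"factors": [0.3, 0.5, 0.2], "succeeded": False},
    {"factors": [0.9, 0.4, 0.7], "succeeded": True},
]

def distance(a, b):
    """Euclidean distance between two factor vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_success(client_factors, bank, k=3):
    """Proportion of the k most similar past cases that succeeded."""
    nearest = sorted(bank, key=lambda case: distance(case["factors"], client_factors))[:k]
    return sum(case["succeeded"] for case in nearest) / k

# The solicitor enters the client's factors and receives a bare probability,
# with no reasons attached, which is exactly the transparency problem.
print(predict_success([0.75, 0.55, 0.8], precedent_bank))  # e.g. 1.0
```

Even in this toy form, the output is a number without reasons; in a real product, the factors, the weighting and the underlying data are usually invisible to the solicitor relaying the result.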
If an artificial intelligence system scans through an online collection of cases, applies the variables relating to the client, and advises that the client is unlikely to be successful at trial, ought the solicitor to recommend that the client does not continue to litigate the matter?
Is the solicitor satisfied that the data pool is such that they understand the decision made by the algorithm and the logic behind it? If not, is the solicitor acting in the client’s best interests by relaying that advice?
Even if the advice is considered to be erroneous in some way, it could then be difficult to assign liability, because it would likely be unclear whether any negligence was caused by the solicitor or the developers of the technology.
To complicate things further, lawtech applications often communicate with various sub-systems via the internet, so it may become harder still to apportion blame to one party given an error may be caused by one of many sub-systems. This would in turn make it difficult (i.e. expensive) for consumers to prove incidents of professional negligence.
Further afield, society will need to decide whether it is comfortable with artificial intelligence playing a greater role within the justice system itself.
Criminal courts have already piloted a system in which people can plead guilty to minor criminal offences by way of an online portal. Are we really that far away from an algorithm which weighs up judicial considerations and mitigation variables and produces a sentence in response to those pleas?
If so, what cross-checks will be run against these ‘robot judges’? And will it always be easy to determine exactly how these algorithms have arrived at their conclusions?
The lawtech revolution is an exciting one, but much like the dawning of social media upon the legal profession, it brings with it significant potential for regulatory misconduct.
Practitioners may be best placed to treat lawtech as they would the help of any other professional advisor: making it clear to the client how the information was arrived at, and the extent, if any, to which the solicitor can fully endorse it.
Richard Burnham is a solicitor and co-founder of Eallium CMS, a lawtech team currently developing a lightweight case management system that is free for small firms with a view to increasing access to justice.