Guest post by Andrew Dunkley, head of analytics at insurance risk and commercial law firm BLM
Lord Hodge’s recent lecture at the University of Edinburgh is a welcome attempt to grapple with the legal implications of data-related technologies, and with those technologies’ practical implications for the law.
The legal profession, judiciary, regulators and legislators will have to consider the issues Lord Hodge raises at length over a number of years. However, a few thoughts sprang to mind while reading the lecture.
Design decisions in AI
There is widespread confusion about what is involved in building AI, in particular the notion that it is ‘intelligent’ of its own accord, independent of human input. The idea is that the AI makes decisions on its own, and that this makes it hard to connect those decisions to a legal person.
This is not correct.
When you are designing an AI model, you take a whole series of fundamental decisions that define how the final model works. These include ‘editorial’ choices about the training data you give the computer, the variables you tell it to focus on and the framework in which it operates.
This is the case even in the most advanced forms of machine learning, where the computer ‘learns from itself’.
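To make that concrete, here is a minimal sketch in Python (a hypothetical insurance-claims example using scikit-learn; the data, variables and labels are all invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical insurance-claims example: every step below is a human
# design decision, even though the model then 'learns' on its own.
rng = np.random.default_rng(0)

claim_value_k = rng.uniform(1, 50, 500)      # editorial choice: what training data to use
delay_days = rng.integers(0, 365, 500)       # editorial choice: which variables to expose
disputed = (claim_value_k > 25).astype(int)  # synthetic labels, purely for illustration

X = np.column_stack([claim_value_k, delay_days])
model = LogisticRegression(max_iter=1000).fit(X, disputed)  # editorial choice: which framework

# The model's 'decision' on a new claim flows directly from those choices.
print(model.predict([[30, 10]]))
```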
The best example here is Google’s ‘AlphaGo Zero’, which generated its own training data by playing against itself (starting from random play) and then learnt from that data. However, there are human inputs even here. Specifically, humans were still required to encode the ‘rules of the game’ – the system within which the AI learnt, the variables that were available for it to consider, and the bounds of permissible actions.
As a result, AlphaGo Zero cannot cheat – because it is not aware that an illegal move is an option.
Now, imagine that a rogue programmer had decided to enable AlphaGo Zero to break the rules, by teaching the AI that in some circumstances an illegal move is permitted – AlphaGo Zero would then learn to cheat if cheating led to the optimal outcome. However, it isn’t hard to see that the rogue programmer is responsible for the illegal moves.
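A toy sketch makes the point (this is illustrative only, not how AlphaGo Zero is actually implemented): whatever the learning method, the agent can only ever choose among the actions its designers expose to it.

```python
import random

LEGAL_MOVES = ["advance", "retreat", "pass"]
ILLEGAL_MOVES = ["move twice", "take own piece"]

def choose_move(action_space):
    # However sophisticated the learning, the agent samples and evaluates
    # moves only from the action space its designers gave it.
    return random.choice(action_space)

# Standard design: illegal moves are simply absent from the action space,
# so the agent cannot cheat; it does not know cheating exists.
print(choose_move(LEGAL_MOVES))

# The 'rogue programmer' design: widening the action space is a human
# decision, so any resulting illegal moves trace back to that human.
print(choose_move(LEGAL_MOVES + ILLEGAL_MOVES))
```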
When thinking about AI, it’s really important to remember that it is always designed by people, regardless of the methods used.
This process involves taking subjective design decisions that necessarily determine how the AI performs in practice, and what decisions it will take in what circumstances.
As designers, we do not get to step away from responsibility for the recommendations made by the machines we design.
The legal impact of design
Once we understand the extent to which all AI is designed by humans, and that their design choices are inextricably linked with the decisions that the AI takes, it becomes much easier to think through many of the potential challenges that Lord Hodge raises. For example:
- An AI will not be able to breach a contract unless the design team behind it has taught it that breach of contract is a valid course of action in some circumstances. The design team can therefore be held accountable for the breach.
- Because AI is designed and owned, we may not need specific legal personality – we can trace responsibility for its actions, and the relevant legal personality, back to its owner. Where legal personality is required, the same effect can easily be simulated by creating a shell company to own the AI’s code.
- Questions of intent to create legal relations in contract can be bridged by realising that an AI that creates contracts was designed to enter into them – the intent to create legal relations arises from the design team (or the person procuring the design/operating the AI).
- It becomes possible to impose reasonable standards of care on the designers or on procuring/implementing parties, much as you would for any negligently designed machine. What counts as ‘reasonable’ will, as ever, be a question for the courts.
Blockchain and smart contracts
After AI, blockchain is probably the buzzword du jour. Suffice it to say that the extent of its impact is much less clear, not least because many of the things that can be done with blockchain can also be done using other, more conventional security protocols.
For example, it is perfectly possible to design a smart contract that is secured using other means than blockchain. Parties engaging in smart contracts should carefully consider the strengths and weaknesses of the contracting platform they choose.
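As a minimal sketch (the names and the HMAC-based signing are my own illustrative assumptions, not any real platform’s design), a smart contract is essentially conditional logic plus a tamper-evident record, and that record need not live on a blockchain:

```python
import hashlib
import hmac
import json

# Illustrative only: the record is made tamper-evident with an HMAC key
# held by a trusted operator, a conventional alternative to a blockchain.
SIGNING_KEY = b"operator-held-secret"  # assumes conventional key management

def sign(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def execute(record: dict, payment_received: bool) -> dict:
    # The 'contract terms': perform only once payment is confirmed.
    if not payment_received:
        raise ValueError("awaiting payment; contract not performed")
    record["status"] = "performed"
    record["signature"] = sign(record)
    return record

print(execute({"parties": ["buyer", "seller"], "item": "widget"}, True))
```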
To the extent that rescission of a blockchain-based smart contract is hard from a technical perspective, that speaks as much to the design of the system as it does to any problem with the law.
Again, however, smart contracts are perhaps not as new as they might seem. The vending machine analogy gets used a lot, but I can think of at least one smart contract vending machine that is literally used by millions each day – the ticket machine.
When I buy a railway ticket, I am entering into a relatively complex contract, including conditions of carriage. I provide the details of the contract I want (destination, first class or standard, age, time of travel, etc.) to a machine, which uses this information to work out which contract is appropriate.
Upon payment to the machine, a contract is created that gives me an enforceable right. The law has no problem recognising that the contract is between me and the railway company and enforcing the agreement between us.
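In code terms, the ticket machine’s logic might look something like the sketch below (the fares, discounts and rules are invented for illustration; real conditions of carriage are far more detailed):

```python
BASE_FARES = {"London": 30.00, "Edinburgh": 55.00}  # hypothetical fares

def quote(destination: str, first_class: bool, age: int) -> float:
    # The machine maps my inputs to the appropriate standard-form contract.
    fare = BASE_FARES[destination]
    if first_class:
        fare *= 1.5               # illustrative first-class uplift
    if age < 16 or age >= 60:
        fare *= 0.5               # illustrative child/senior discount
    return round(fare, 2)

def buy_ticket(destination: str, first_class: bool, age: int, payment: float):
    fare = quote(destination, first_class, age)
    if payment < fare:
        return None               # no payment, no contract
    # Payment accepted: an enforceable contract now exists between the
    # passenger and the railway company, incorporating its conditions of carriage.
    return {"to": destination,
            "class": "first" if first_class else "standard",
            "fare": fare,
            "terms": "conditions of carriage apply"}

print(buy_ticket("London", first_class=False, age=34, payment=30.00))
```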
Suffice it to say that it is currently unclear what impact blockchain or cryptocurrencies will have – whereas we can say with confidence that AI is already having a major impact on the world.
I’ll sign off on cryptocurrencies with a (slightly whimsical) observation – through the right of Scottish and Northern Irish banks to issue their own banknotes, we already have experience of issuing and regulating a currency that is not legal tender yet is still in general use.
I wonder if this could be used to form the basis for a workable privately backed digital currency? Answers on a postcard…
Conclusion
It is really positive to see members of the Supreme Court thinking about how to address the challenges that rapid technological change will pose for the law.
While I have set out a few differences of perspective above, my general response is ‘bravo!’ We need more judges engaging in this way.
However, this leads me to pick up on Lord Hodge’s comment that in his opinion “it is not practicable to develop the common law through case law to create a suitable legal regime for fintech. The judiciary does not have the institutional capacity to do so”.
I am just not sure that this position is sustainable. Technology in this area is already moving far faster than regulation.
With Brexit and other demands on parliamentary time, this will continue to be the case. I would suggest that the judiciary is being optimistic if it expects parliament to intervene before the courts are forced to address these issues through case law. If that happens, the judiciary will need to develop the institutional capacity.
Which means we need more senior judges engaging with the subject in the way that Lord Hodge has.