Law firms and other businesses that are “over-excited” by the potential of artificial intelligence (AI) chatbot ChatGPT to boost productivity risk inputting sensitive and confidential client information, a data privacy specialist has warned.
Richard Forrest, legal director at data breach specialists Hayes Connor, said firms that used ChatGPT-style technology needed “very specific training for their staff, very careful management and very clear agreements with clients”.
Mr Forrest said he was not yet working on any ChatGPT claims but, with the technology still in its “relatively early stages”, there were “not many restrictions in place”.
He said businesses would have to put “formal processes in place” if they wanted to use the chatbot and avoid claims under the GDPR.
Mr Forrest said the key was “what kind of information is inputted into it”, and a survey had shown that 11% of the data inputted by staff at businesses was sensitive.
He recommended that firms assume “anything you enter could later be accessible in the public domain”, avoid inputting “software code or internal data”, revise confidentiality agreements to include the use of AI, create an explicit clause in employee contracts, hold company training on the use of AI, and produce a policy and employee user guide.
Mr Forrest said nobody at Hayes Connor was using ChatGPT because it was “not a tool that we can see the benefit from”.
He went on: “For many firms the potential risks and negatives will simply outweigh the positives it could bring. What are the benefits to be drawn from it, when the full extent of the risks is not known?
“The danger to me is an obvious one. The whole product depends on bringing in information to increase its use. If what is inputted is confidential in terms of clients, there is a danger that it could reappear at a later date in answer to a third party query.”
He said the information inputted into ChatGPT could include not just confidential client data but also commercially sensitive information.
“Everyone in a business is under pressure to deliver results. If they feel they can draw a real benefit from something, the danger is they become over-excited and fail to realise they are inputting private information.”
Businesses incorporating the chatbot into work processes were in “uncharted territory” in terms of GDPR compliance.
Those using ChatGPT without proper training may “unknowingly expose themselves” to data breaches, resulting in “significant fines, reputational damage, and legal action taken against them”.
Mr Forrest said the “large-scale” ChatGPT data breach at the end of last month, involving conversation history, billing and payment data, was a “warning sign” of what could go wrong.
Earlier this month, the Italian data protection authority Garante requested that OpenAI block the chatbot while it investigated privacy concerns, a request with which OpenAI complied. In light of this, Mr Forrest said people should “take a step back” and “make a proper assessment” of the technology.
He added: “There will have to be a review and legislation to ensure that consumers and businesses are protected.”