Lawyers face heightened risk in “grim” AI-fuelled crime outlook


Europol: AI chatbots an invaluable resource for criminals

Lawyers and others face a future in which criminals can use artificial intelligence (AI) tools like ChatGPT to enhance their ability to commit fraud, Europe’s leading law enforcement organisation has warned.

In a report on recent advances in the AI chatbot ChatGPT, which is trained on a vast body of human knowledge, Europol concluded that, alongside its benefits to society, ‘bad actors’ will harness the technology to unleash a range of attacks.

Safeguards built into the AI by its developers to prevent misuse could easily be bypassed by people seeking to commit crimes, Europol suggested.

Apps based on large language models (LLMs) like ChatGPT can ‘understand’ and generate human-like text, translate between spoken languages, interpret images, answer questions on a huge variety of topics, and write code in most common programming languages.

Meanwhile, the measures lawyers currently use to avoid becoming victims of cybercrime may no longer be up to the task. Fraud attempts that might previously have been spotted because of their poor English grammar, for example, are likely to be a thing of the past, the report said.

“ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge… the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime…

“[This technology] may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to… adopt a specific writing style.

“Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance to promote a fraudulent investment offer.”

The chatbot’s capabilities were likely to grow significantly in future, creating a “grim outlook” for digital crime, Europol predicted.

Developers’ efforts to build in safeguards were likely to fail as criminals moved exploitable databases onto the Dark Web, away from the eyes of law enforcement.

Supercharging the efforts of crooks was the ability to turn the chatbot’s skill at writing computer code to nefarious ends: “For a potential criminal with little technical knowledge, this is an invaluable resource.

“At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi.”

Three alarming possibilities might lurk in the future, the Netherlands-based international crime-fighting agency posited: first, that “multimodal” AI systems could emerge, combining conversational chatbots with systems that create synthetic media – such as “highly convincing deepfakes”.

Second was that “dark LLMs” could be hosted on the Dark Web, providing a chatbot without safeguards and trained on particularly harmful data – elsewhere identified as possibly including the knowledge needed to make chemical or biological weapons.

Third was the danger that LLM services might expose personal data – such as recorded private conversations – to unauthorised third parties.

Having considered these possible outcomes of malicious LLM use, Europol urged fellow law enforcement bodies, regulators and the AI sector to collaborate on improving safety measures, and even to explore whether they could use their own customised LLMs to “leverage this type of technology”, subject to respecting fundamental rights.

It concluded: “As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse, as well as to ensure potential benefits can be taken advantage of.

“This report is a first exploration of this emerging field. Given the rapid pace of this technology, it remains critical that subject-matter experts take this research further and dive deeper if they are to grasp its full potential.”

Separately, the UK government last week published a white paper on its proposed framework for regulating the use of AI, which would make existing regulators responsible for overseeing AI developers in their individual sectors while not stifling innovation.

It also recognised the risks of the misuse of the technology: “AI tools can be used to automate, accelerate and magnify the impact of highly targeted cyber attacks, increasing the severity of the threat from malicious actors.

“The emergence of LLMs enables hackers with little technical knowledge or skill to generate phishing campaigns with malware delivery capabilities.”



