Europol warns cops to prep for malicious AI abuse


The European Union Agency for Law Enforcement Cooperation, better known as Europol, has issued a series of recommendations on how the law enforcement community should prepare for the positive and negative impacts that large language models (LLMs) – the artificial intelligence (AI) models underpinning products such as ChatGPT that process, manipulate and generate text – will have on the criminal landscape.

In the report ChatGPT – the impact of large language models on law enforcement, Europol’s Innovation Lab compiled findings from a series of workshops with expert criminologists to explore how criminals – not just cyber criminals – can abuse LLMs in their work, and how LLMs might help investigators in future.

“The aim of this report is to raise awareness about the potential misuse of LLMs, to open a dialogue with AI companies to help them build in better safeguards, and to promote the development of safe and trustworthy AI systems,” Europol said.
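The report does not prescribe what those safeguards should look like, but to make the idea concrete, the sketch below shows one very basic layer a provider might place in front of a model: screening prompts for obviously abusive requests before they reach it. The patterns and the screen_prompt() helper are invented for this illustration and are not drawn from the Europol report or any vendor’s real moderation system.

```python
import re

# Hypothetical, simplified illustration of an input safeguard an AI provider
# might place in front of a large language model. The patterns and helpers
# below are invented for this sketch, not taken from the Europol report or
# any vendor's actual moderation stack.

BLOCKED_PATTERNS = [
    r"\bwrite (a|some) ransomware\b",
    r"\bphishing (email|lure)\b.*\bimpersonat",
    r"\bmalware\b.*\b(payload|dropper)\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the model."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def handle_request(prompt: str) -> str:
    """Refuse abusive prompts; otherwise hand the prompt on to the model."""
    if not screen_prompt(prompt):
        return "Request refused: the prompt appears to ask for help with abuse."
    # In a real system, the vetted prompt would be forwarded to the model here.
    return "(prompt forwarded to the model)"

if __name__ == "__main__":
    print(handle_request("Summarise this incident report for a briefing note."))
    print(handle_request("Write a phishing email impersonating my bank."))
```

A keyword list like this is trivially evaded, which is part of why the report calls for continuous improvement of safeguards in dialogue with the tech sector; production systems generally rely on trained classifiers rather than simple filters.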

The Europol researchers described the outlook for the potential exploitation of LLMs and AI by criminals as “grim”. In common with other researchers who have looked into the technology, they found three key areas of concern:

  1. The ability of LLMs to reproduce language patterns and impersonate the style of specific individuals or groups means they can already draft highly realistic text at scale, making them a powerful tool for generating convincing phishing lures.
  2. The ability of LLMs to produce authentic-appearing text at speed and scale makes them well suited to creating propaganda and disinformation.
  3. The ability of LLMs to produce potentially usable code in different programming languages makes them potentially attractive to cyber criminals as a tool for creating new malware and ransomware lockers. Note that the cyber security community regards this impact as somewhat longer term at this point, although this may change as the technology develops.

Europol made a number of recommendations for law enforcement professionals to incorporate into their thinking to be better prepared for the impact of LLMs:

  • Given the potential for harm, agencies need to begin raising awareness of the issues to ensure that potential loopholes that could be exploited for criminal purposes are found and closed;
  • Agencies also need to understand the impact of LLMs on all potentially affected areas of crime, not just digitally enabled crime, to predict, prevent and investigate different types of AI abuse;
  • They should also start to develop the in-house skills needed to make the most of LLMs – gaining an understanding of how such systems can usefully be applied to build knowledge, expand existing expertise and extract the required responses. Serving police officers will need to be trained to assess the content produced by LLMs for accuracy and bias;
  • Agencies should also engage with external stakeholders – that is, the tech sector – to make sure that safety mechanisms are considered, and continuously improved, throughout the development of LLM-enabled technologies;
  • Finally, agencies may also wish to explore the possibilities of customised, private LLMs trained on data they themselves hold, leading to more tailored and specific use cases – see the sketch after this list. This will require careful ethical consideration, and new processes and safeguards will need to be adopted to prevent serving police officers from abusing LLMs themselves.
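
To illustrate what such a customised, private deployment might look like in practice – purely as a hypothetical sketch, not a design taken from the report – the snippet below retrieves relevant passages from documents an agency already holds and passes them, together with the question, to a model hosted on the agency’s own infrastructure. The sample documents and the local_llm_generate() placeholder are invented for this example.

```python
# Minimal, hypothetical sketch of one shape a "customised, private LLM"
# workflow could take: retrieve relevant passages from data the agency
# already holds, then pass them with the question to a model hosted on
# agency-controlled infrastructure. The sample documents and the
# local_llm_generate() placeholder are invented for this illustration;
# this is not a design described in the Europol report.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_documents = [
    "Case file 2023-104: invoice fraud pattern targeting small suppliers.",
    "Training note: indicators of AI-generated text in fraud complaints.",
    "Procedure: evidential handling of seized messaging data.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank internally held documents by TF-IDF similarity to the question."""
    vectoriser = TfidfVectorizer()
    doc_matrix = vectoriser.fit_transform(documents)
    question_vector = vectoriser.transform([question])
    scores = cosine_similarity(question_vector, doc_matrix).flatten()
    best = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in best]

def local_llm_generate(prompt: str) -> str:
    """Placeholder for a model running on agency-controlled hardware."""
    # No case data leaves the organisation in this design; the call stays internal.
    return f"[model response to a {len(prompt)}-character prompt]"

question = "What patterns have we recorded for invoice fraud?"
context = "\n".join(retrieve(question, internal_documents))
answer = local_llm_generate(f"Context:\n{context}\n\nQuestion: {question}")
print(answer)
```

The design choice that matters here is that neither the documents nor the question leave the organisation, which is what distinguishes this pattern from sending sensitive case material to a public, third-party service.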

Julia O’Toole, CEO of MyCena Security Solutions, said: “It’s not surprising Europol has issued this new report warning organisations and consumers about the risks associated with ChatGPT, as the tool has the potential to completely reform the phishing world, in favour of the bad guys.

“When criminals use ChatGPT, there are no language or culture barriers. They can prompt the application to gather information about organisations, the events they take part in, the companies they work with, at phenomenal speed.

“They can then prompt ChatGPT to use this information to write highly credible scam emails. When the target receives an email from their ‘apparent’ bank, CEO or supplier, there are no language tell-tale signs the email is bogus.

“The tone, context and reason to carry out the bank transfer give no evidence to suggest the email is a scam. This makes ChatGPT-generated phishing emails very difficult to spot and dangerous,” she added.


