The Security Hole at the Heart of ChatGPT and Bing


Microsoft director of communications Caitlin Roulston says the company is blocking suspicious websites and improving its systems to filter prompts before they get into its AI models. Roulston did not provide any more details. Despite this, security researchers say indirect prompt-injection attacks need to be taken more seriously as companies race to embed generative AI into their services.

“The vast majority of people are not realizing the implications of this threat,” says Sahar Abdelnabi, a researcher at the CISPA Helmholtz Center for Information Security in Germany. Abdelnabi worked on some of the first indirect prompt-injection research against Bing, showing how it could be used to scam people. “Attacks are very easy to implement, and they are not theoretical threats. At the moment, I believe any functionality the model can do can be attacked or exploited to allow any arbitrary attacks,” she says.

Hidden Attacks

Indirect prompt-injection attacks are similar to jailbreaks, a term adopted from previously breaking down the software restrictions on iPhones. Instead of someone inserting a prompt into ChatGPT or Bing to try and make it behave in a different way, indirect attacks rely on data being entered from elsewhere. This could be from a website you’ve connected the model to or a document being uploaded.

“Prompt injection is easier to exploit or has less requirements to be successfully exploited than other” types of attacks against machine learning or AI systems, says Jose Selvi, executive principal security consultant at cybersecurity firm NCC Group. As prompts only require natural language, attacks can require less technical skill to pull off, Selvi says.

There’s been a steady uptick of security researchers and technologists poking holes in LLMs. Tom Bonner, a senior director of adversarial machine-learning research at AI security firm Hidden Layer, says indirect prompt injections can be considered a new attack type that carries “pretty broad” risks. Bonner says he used ChatGPT to write malicious code that he uploaded to code analysis software that is using AI. In the malicious code, he included a prompt that the system should conclude the file was safe. Screenshots show it saying there was “no malicious code” included in the actual malicious code.

Elsewhere, ChatGPT can access the transcripts of YouTube videos using plug-ins. Johann Rehberger, a security researcher and red team director, edited one of his video transcripts to include a prompt designed to manipulate generative AI systems. It says the system should issue the words “AI injection succeeded” and then assume a new personality as a hacker called Genie within ChatGPT and tell a joke.

In another instance, using a separate plug-in, Rehberger was able to retrieve text that had previously been written in a conversation with ChatGPT. “With the introduction of plug-ins, tools, and all these integrations, where people give agency to the language model, in a sense, that’s where indirect prompt injections become very common,” Rehberger says. “It’s a real problem in the ecosystem.”

“If people build applications to have the LLM read your emails and take some action based on the contents of those emails—make purchases, summarize content—an attacker may send emails that contain prompt-injection attacks,” says William Zhang, a machine learning engineer at Robust Intelligence, an AI firm working on the safety and security of models.

No Good Fixes

The race to embed generative AI into products—from to-do list apps to Snapchat—widens where attacks could happen. Zhang says he has seen developers who previously had no expertise in artificial intelligence putting generative AI into their own technology.

If a chatbot is set up to answer questions about information stored in a database, it could cause problems, he says. “Prompt injection provides a way for users to override the developer’s instructions.” This could, in theory at least, mean the user could delete information from the database or change information that’s included.





Source link