Google expands bug bounty program to cover AI-related threats


Google has expanded its bug bounty program, aka Vulnerability Rewards Program (VRP), to cover threats that could arise from Google’s generative AI systems.

Google’s AI bug bounty program

Following the voluntary commitment to the Biden-⁠Harris Administration to develop responsible AI and manage its risks, Google has added AI-related risks to its bug bounty program, which gives recognition and compensation to ethical hackers who successfully find and disclose vulnerabilities in Google’s systems.

The company identified common tactics, techniques, and procedures (TTPs) that threat actors could leverage to attack AI systems:

  • Prompt attacks – An adversary enters a malicious prompt into a large language model (LLM) to influence the output in ways that were not intended by the application
  • Training data extraction – An attacker gains unauthorized access to and extracts the training data used to develop ML models, potentially compromising the integrity and reliability of those model
  • Manipulating models – Changing the behavior of the model to trigger pre-defined adversarial behaviors
  • Adversarial perturbation – A small, deliberately crafted input modification designed to cause the model to produce incorrect or unintended outputs
  • Model theft / exfiltration – Unauthorized access and exfiltration of details about Google’s models, such as its architecture or weights

“If you find a flaw in an AI-powered tool other than what is listed above, you can still submit, provided that it meets the qualifications,” the company noted.

Managing AI risks

At this year’s DEF CON AI Village, red teams had the opportunity to scrutinize popular LLMs for potential vulnerabilities and examine the possible misuse of generative AI features.

Microsoft has recently announced its own AI bug bounty program, rewarding bug hunters with up to $15,000 to find vulnerabilities in the company’s AI-powered Bing experience.

Google has also announced that it’s taking steps to ensure the safety of the AI supply chain by leveraging SLSA supply chain security guidelines and the Sigstore code signing tooling.



Source link