Google’s Big Sleep AI Tool Finds 20 Open-Source Bugs
An experimental AI tool developed by Google has identified its first set of real-world security vulnerabilities in widely used open-source projects. The tool, internally codenamed Big Sleep, has uncovered 20 bugs, according to statements from Google’s security division.
The AI bug hunter is the result of a collaboration between DeepMind and Project Zero, Google’s internal security research team, and is part of an ongoing initiative to explore how artificial intelligence can assist in identifying software vulnerabilities. Heather Adkins, Google’s Vice President of Security, confirmed that the AI tool flagged bugs across several open-source projects, including FFmpeg, a multimedia framework, and ImageMagick, a graphics processing library.
The vulnerabilities discovered by Big Sleep have not yet been publicly detailed, as is standard practice in security research to prevent potential exploitation before fixes are available. According to Google, each issue was autonomously found and reproduced by the AI agent, though a human analyst was still involved to verify the findings before they were reported.
Transparency Trial to Address the Patch Gap
Alongside the Big Sleep findings, Google has also introduced a new disclosure policy aimed at addressing what it calls the “upstream patch gap.” This term refers to the time delay between a vulnerability being fixed by an upstream vendor and that fix being implemented in downstream products used by end users.
In a recent blog post, the company outlined a Reporting Transparency trial policy. While keeping its existing “90+30” model (90 days for vendors to fix the issue, with an optional 30-day extension for patch rollout), the new approach will now include an early disclosure step.
Approximately one week after a vulnerability is reported to a vendor, Google will publicly disclose:
- The name of the affected vendor or project
- The impacted product
- The date the report was filed
- The 90-day deadline for resolution
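The timeline described above can be sketched as simple date arithmetic. This is a minimal illustration, not Google's actual tooling; the function name and the example report date are hypothetical, while the intervals (one week to early disclosure, 90 days to fix, an optional 30-day extension) come from the policy as reported.

```python
from datetime import date, timedelta

def disclosure_timeline(report_date: date, extension: bool = False) -> dict:
    """Compute the key dates in the trial disclosure policy.

    Intervals are taken from the article: early disclosure roughly one
    week after the report, a 90-day fix deadline, and an optional
    30-day extension for patch rollout ("90+30").
    """
    return {
        "report_filed": report_date,
        "early_disclosure": report_date + timedelta(weeks=1),   # vendor/product named publicly
        "fix_deadline": report_date + timedelta(days=90),       # the "90" in 90+30
        "full_disclosure": report_date
            + timedelta(days=120 if extension else 90),         # "+30" if extended
    }

# Hypothetical example: a report filed on 2025-01-15 with the extension granted
timeline = disclosure_timeline(date(2025, 1, 15), extension=True)
```

Under those assumptions, a report filed in mid-January would be publicly named about a week later, with full technical disclosure withheld until the 90-day (or 120-day) deadline passes.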
This change is intended to give downstream maintainers earlier visibility into security issues that may eventually affect their users. According to Google, this step will not include technical details or code that could aid malicious actors.
“There may be increased public attention on unfixed bugs,” the blog post acknowledged, “but we want to be clear: no technical details, proof-of-concept code, or information that we believe would materially assist discovery will be released until the deadline.”
The policy is also being applied to Big Sleep’s findings, meaning any vulnerabilities reported by the AI tool will follow the same transparency timeline.
Broader Context for Big Sleep
This shift in approach reflects a broader industry trend toward making vulnerability disclosure more accountable and time-sensitive. Google argues that while security research has improved, long gaps between patch development and actual adoption still leave systems exposed.
The company notes that this delay typically occurs not after a patch is published, but before it reaches end users, during the stage when downstream vendors are integrating the upstream fix. The result is that even known, fixed vulnerabilities may remain exploitable for weeks or months.
Google says the ultimate goal is to reduce the lifespan of vulnerabilities by closing these upstream delays. Still, the new policy is being introduced as a trial, and its effectiveness will be evaluated over time.