The dual reality of AI-augmented development: innovation and risk

When JPMorgan Chase CISO Patrick Opet published an open letter to software suppliers in April, he wasn’t just raising concerns — he was sounding an alarm. 

The numbers from the 2025 Verizon Data Breach Investigations Report should make every security leader lose sleep: 30% of breaches now involve third-party components, double last year's share. And here's the kicker: this explosion in supply chain risk is happening just as AI begins writing a massive portion of our code.

Consider Google, an example that should terrify every CISO: AI is already writing 30% of Google's code, while most security teams are still relying on tools designed for a world where humans wrote everything. That isn't just a gap; it's a chasm.

Cause for concern 

Large language models, machine learning models, and generative AI are transforming the software development landscape, producing a growing share of the applications that businesses rely on daily. According to MarketsandMarkets, the AI coding sector is expected to grow from approximately $4 billion in 2024 to nearly $13 billion by 2028. This marriage of AI and software development will deliver real efficiency gains and new capabilities. But alongside those remarkable benefits come novel security considerations that require specialized attention.

We’ve seen this play out before. After 20-plus years leading security teams in energy and technology, I can tell you that every major security evolution follows the same blueprint: new technology creates new risks faster than our defenses adapt. AI development is no exception.

AI coding assistants like GitHub Copilot, CodeGeeX, and Amazon Q Developer fundamentally differ from human developers in critical ways. One of the biggest is that they lack development experience, contextual understanding, and human judgment: the qualities essential for distinguishing secure code from vulnerable implementations.

AI tools also train on vast repositories of historical code, some of which contain known vulnerabilities, deprecated encryption methods, and outdated components. AI assistants then reproduce these elements in new applications, introducing software supply chain security risks that traditional security tools, such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA), weren't designed to detect.
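To make that concrete, here is a hypothetical illustration (not drawn from any specific assistant) of the kind of suggestion a model trained on older repositories can surface: a deprecated, unsalted hash for password storage, next to a safer alternative.

```python
# Hypothetical illustration: deprecated cryptography that still appears
# throughout historical training code, alongside a modern replacement.
import hashlib

def hash_password_insecure(password: str) -> str:
    # MD5 is fast and unsalted, long deprecated for password storage,
    # yet common in the older code AI assistants learned from.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_safer(password: str, salt: bytes) -> str:
    # A slow, salted key-derivation function is the safer pattern.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000).hex()
```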

What makes these tools insufficient is that they focus primarily on known vulnerability patterns and component versions. They cannot effectively evaluate AI-specific threats, such as data poisoning attacks and memetic viruses, which can corrupt machine-learning models and lead to the generation of exploitable code. Newer startups in the AI security space share many of the same limitations as legacy solutions when it comes to file size and complexity, and they cannot comprehensively analyze a model for all of its potential risks, such as malware, tampering, and deserialization attacks hidden in model file formats.
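As a minimal sketch of why deserialization in model file formats matters, assuming a pickle-based format (the class and command below are purely illustrative): loading an untrusted model file can execute attacker-chosen code before any inference happens.

```python
# Minimal sketch: pickle-based model formats can execute arbitrary code
# at load time. The payload here is harmless and purely illustrative.
import pickle

class TamperedModel:
    def __reduce__(self):
        # On unpickling, this runs a command instead of restoring weights.
        # A real payload could install a backdoor or exfiltrate secrets.
        import os
        return (os.system, ("echo 'code executed during model load'",))

tampered_model_bytes = pickle.dumps(TamperedModel())

# Loading the "model" is enough to trigger execution; no inference required.
pickle.loads(tampered_model_bytes)
```

Safer formats such as safetensors avoid executing code on load, but the broader point stands: the model file itself is an attack surface that SAST, DAST, and SCA were never built to inspect.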

A final area where these traditional security tools fall short is that they typically analyze code during development rather than examining the final, compiled application. This approach creates blind spots where malicious modifications introduced during the build process or through AI assistance remain undetected. Examining software in its compiled state has become essential for identifying unauthorized or potentially harmful additions.
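One way to picture what examining the compiled artifact adds, sketched here under the assumption of a zip-based build output (such as a Python wheel) and a hypothetical allowlist taken from the build manifest: diff what actually shipped against what the reviewed source should have produced.

```python
# Minimal sketch: surface files in the final artifact that never appeared
# in reviewed source. Paths and the allowlist below are hypothetical.
import zipfile

EXPECTED_FILES = {
    "app/__init__.py",
    "app/main.py",
    "app/utils.py",
}

def unexpected_inclusions(artifact_path: str) -> set[str]:
    with zipfile.ZipFile(artifact_path) as artifact:
        shipped = {name for name in artifact.namelist() if not name.endswith("/")}
    return shipped - EXPECTED_FILES

# Example usage: anything returned here was added during or after the build.
# print(unexpected_inclusions("dist/app-1.0.0-py3-none-any.whl"))
```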

What next?

As organizations increasingly incorporate AI coding tools, they must evolve their security strategies. That's because AI models can run to gigabytes in size and ship in complex file formats that traditional tools simply can't process. Addressing these emerging risks requires analysis capabilities built for artifacts of that scale, as well as comprehensive software supply chain security measures capable of doing the following:

  1. Verifying the provenance and integrity of AI models used in development (a minimal sketch of this check follows the list)
  2. Validating the security of components and code suggested by AI assistants
  3. Examining compiled applications to detect unexpected or unauthorized inclusions
  4. Monitoring for potential data poisoning that might compromise AI systems
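
As a rough illustration of the first item, assuming the organization keeps a manifest of vetted model digests (the manifest and file names below are hypothetical): check a model's SHA-256 against that record before it enters development or CI, streaming the file because models can run to gigabytes.

```python
# Minimal sketch: verify a model file against a manifest of vetted digests.
# The manifest contents and file names are hypothetical.
import hashlib

APPROVED_MODEL_DIGESTS = {
    "codegen-model-v2.safetensors": "<sha256 recorded when the model was vetted>",
}

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream in chunks so multi-gigabyte models never need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def model_is_approved(path: str) -> bool:
    name = path.rsplit("/", 1)[-1]
    return APPROVED_MODEL_DIGESTS.get(name) == sha256_of(path)

# Example usage: fail the pipeline if the model has been swapped or tampered with.
# assert model_is_approved("models/codegen-model-v2.safetensors"), "model check failed"
```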

The marriage of AI and software development isn’t optional — it’s inevitable. Patrick Opet was right when he urged software providers and security practitioners to step up and address the new threats targeting the software supply chain. 

The organizations that adapt their security strategies by implementing comprehensive software supply chain security, which can analyze everything from massive AI models to the compiled applications they help create, are the ones that will thrive. 

As for those that don’t, they will become cautionary tales in next year’s breach reports.

Saša Zdjelar is the chief trust officer of ReversingLabs.
