The Vibe Coding Security Gap

Security researchers are warning that the rapid mainstream adoption of AI-assisted “vibe coding” is introducing new risks into software development pipelines, with insecure code now being generated faster than many organisations can detect or remediate it.

New analysis from Unit 42, citing data from the State of Cloud Security Report 2025, finds that AI agents are now used in software development by 99 per cent of organisations. While AI coding tools have significantly increased development speed and productivity, researchers say they are also accelerating the creation and deployment of insecure code at an unprecedented scale.

According to Unit 42, the issue is not the use of AI itself, but how it is being applied. The analysis found that many organisations are deprioritising long-established secure development principles, such as least privilege and defence-in-depth, in favour of speed and functionality. This shift is allowing vulnerabilities, technical debt and supply-chain weaknesses to accumulate more quickly than security teams can address them.
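What deprioritising least privilege looks like in practice is easy to sketch. The following is a hypothetical illustration, not code from the Unit 42 analysis: AI assistants frequently scaffold cloud permissions with wildcards because they work on the first attempt, where a least-privilege policy would name only the operations and resources the application actually needs. The bucket name below is invented for the example.

```python
import json

# Over-permissive policy of the kind AI assistants commonly emit:
# every S3 operation, on every bucket in the account.
overly_broad_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",   # all S3 operations
        "Resource": "*",    # all resources
    }],
}

# Least-privilege equivalent: only the two operations the application
# needs, scoped to the one (hypothetical) bucket it uses.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::app-uploads-bucket/*",
    }],
}

if __name__ == "__main__":
    print(json.dumps(least_privilege_policy, indent=2))
```

The broad policy is rarely wrong in the functional sense, which is exactly why speed-focused workflows keep it: the cost only appears later, when a compromised credential inherits the wildcard.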

The rise of “citizen developers” is compounding the problem. These users, often without formal software engineering or secure code review training, are increasingly deploying AI-generated code directly into production environments. Unit 42 said this trend is expanding the attack surface and increasing the likelihood that exploitable flaws will reach live systems.
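The kind of flaw that slips through is often basic. As a hypothetical illustration, again not an example taken from the report, a generated database query that interpolates user input directly into the SQL text is exploitable by classic injection, while the parameterised form is not:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    # Pattern often seen in generated code: user input is interpolated
    # straight into the SQL text, so an input such as "x' OR '1'='1"
    # rewrites the query itself (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # Parameterised query: the driver passes the value separately from
    # the SQL text, so it is treated as data and never parsed as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Both functions return the same rows for honest input, which is why an untrained reviewer, or no reviewer at all, lets the first one through.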

Researchers warned that AI-assisted coding tools can amplify existing weaknesses in development processes, particularly where organisations lack strong governance, code review practices and access controls. In such environments, insecure patterns can be reproduced repeatedly, increasing systemic risk across applications and services.
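One common counter to that repetition is an automated gate in front of merges. The sketch below is an assumed setup rather than a Unit 42 prescription: it runs the open-source Bandit static analyser over a Python codebase and fails the build on high-severity findings, regardless of whether the code was written by a person or a model.

```python
import subprocess
import sys

# Illustrative CI guardrail, assuming a Python codebase with Bandit
# installed (pip install bandit). Tool choice, path and severity
# threshold are assumptions for the example.

def security_gate(path: str = "src/") -> int:
    # "-r" scans the directory recursively; "-lll" reports only
    # high-severity findings, so the gate blocks serious patterns
    # without failing the build on low-level noise.
    result = subprocess.run(["bandit", "-r", path, "-lll"])
    return result.returncode  # non-zero when high-severity issues exist

if __name__ == "__main__":
    sys.exit(security_gate())
```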

To help address these challenges, Unit 42 has introduced a new framework known as SHIELD, designed to re-embed secure design principles into AI-assisted development workflows. The framework is intended to help organisations balance productivity gains with risk management by applying structured security controls throughout the software lifecycle, rather than attempting to retrofit security after code has already been deployed.

The analysis argues that without changes to development governance, AI-assisted coding could become a significant driver of future breaches, particularly as vulnerabilities propagate through shared libraries and software supply chains.

Unit 42 said organisations adopting AI coding tools need to reassess how privilege, access, validation and review are applied in modern development environments, especially as automation and autonomy increase. The findings suggest that security teams will need greater visibility into AI-generated code paths and stronger guardrails around how such tools are used.
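What that visibility could look like remains an open question. One minimal sketch, assuming a team convention (hypothetical, not an industry standard) of marking AI-assisted commits with a trailer such as "AI-Assisted: true", is to surface those commits so reviewers can target them:

```python
import subprocess

# Hypothetical visibility aid: list commits whose messages carry an
# assumed "AI-Assisted: true" trailer, for targeted human review.
def ai_assisted_commits(repo: str = ".") -> list[str]:
    out = subprocess.run(
        ["git", "log", "--grep", "AI-Assisted: true", "--format=%H %s"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

if __name__ == "__main__":
    for line in ai_assisted_commits():
        print(line)
```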

The full analysis is available from Unit 42, and the research team has indicated it is continuing to examine how organisations can better align AI-driven development practices with established secure-by-design principles.