The industry is entering a phase where code is being deployed faster than it can be secured, according to OX Security. Findings from the Army of Juniors: The AI Code Security Crisis report show that AI-generated code often appears clean and functional but hides structural flaws that can grow into systemic security risks.
Security teams are overwhelmed
OX analyzed more than 300 software repositories, including 50 that used AI coding tools such as GitHub Copilot, Cursor, and Claude. The researchers found that AI-generated code is not more vulnerable per line than human-written code. The problem is speed.
Bottlenecks such as code review, debugging, and team-based oversight have been removed. Software that once took months to build can now be completed and deployed in days. That velocity means vulnerable code reaches production before anyone can properly examine or harden it.
Even before AI, security teams were overloaded. The report cites organizations handling an average of more than half a million security alerts at any time. Now the pace of AI-assisted coding is breaking the remaining controls.
The anti-patterns
The study identifies ten “anti-patterns” that appear repeatedly in AI-generated code, behaviors that contradict long-established secure engineering practices. Some occur in nearly every project; others appear less often but still carry serious consequences.
Among the most common are:
- Comments everywhere (90–100%) – AI models fill code with redundant comments that serve as internal markers to navigate context limits. It looks helpful but mainly supports the AI itself, cluttering repositories and revealing dependence on short-term memory rather than true understanding.
- Avoidance of refactors (80–90%) – Unlike experienced developers who refine and improve code, AI stops at “good enough.” It does not restructure or optimize, leading to growing technical debt.
- Over-specification (80–90%) – AI tools create narrow solutions that cannot be reused. Each variation requires new code instead of small adjustments, producing fragmented systems that are hard to maintain.
- By-the-book fixation (80–90%) – AI follows conventions without questioning them. It produces safe, predictable code but rarely finds more efficient or innovative solutions.
Other recurring patterns include a return to monolithic architectures instead of microservices, “vanilla style” coding where AI rebuilds common functionality instead of using proven libraries, inflated unit test coverage with meaningless tests, and phantom bugs where AI adds logic for imaginary edge cases, wasting resources.
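To make the over-specification pattern concrete, a contrived sketch (invented for illustration, not taken from the report’s examples) might look like this: each variation of a task gets its own narrow function instead of one reusable one.

```python
# Hypothetical sketch of the "over-specification" anti-pattern.
# Each variation of the task gets its own narrow function...
def discount_for_premium_users(price: float) -> float:
    return price * 0.80          # 20% off, hard-coded for one tier

def discount_for_student_users(price: float) -> float:
    return price * 0.85          # 15% off, copy-pasted with one constant changed

# ...where a single parameterized function would absorb new cases without new code.
def discounted_price(price: float, rate: float) -> float:
    return price * (1.0 - rate)

if __name__ == "__main__":
    # Same result, but the general version handles new tiers without new functions.
    print(discount_for_premium_users(100.0), discounted_price(100.0, 0.20))
```

The fragmentation looks harmless in a single file, but multiplied across a repository it produces the maintenance burden and technical debt the report describes.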
Insecure by ignorance
The researchers found that AI code does not necessarily introduce more of the classic vulnerabilities, such as SQL injection or cross-site scripting. The danger lies in who is using it.
AI tools make it easy for anyone to create software, including non-technical users who lack security knowledge. These users often deploy applications without understanding authentication, data protection, or exposure risks. The report calls this “insecure by dumbness,” meaning functional code with missing safeguards because no one involved knew what was required.
Even experienced developers can fall into this trap. Once an AI-generated application runs, teams assume it is production-ready. Questions about data storage, access control, or internet exposure are skipped in the rush to release features.
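As a contrived illustration of what “functional but missing safeguards” can mean in practice, consider the hypothetical sketch below (a Flask endpoint invented for this article, not an example from the report): the demo works, so it ships.

```python
# Hypothetical sketch: a working endpoint that ships without any of the
# safeguards the report says non-experts tend to skip.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for a real datastore.
USERS = {1: {"name": "Alice", "email": "alice@example.com"}}

@app.route("/users/<int:user_id>")
def get_user(user_id: int):
    # Functional: returns the record, and the demo "works".
    # Missing: authentication, authorization, rate limiting, and any thought
    # about whether this service should be reachable from the internet at all.
    return jsonify(USERS.get(user_id, {}))

if __name__ == "__main__":
    app.run()  # binds to localhost by default; wider exposure is one config change away
```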
Human code review was once the main control, but it cannot scale to match AI’s output; reviewers’ focus and judgment simply cannot keep pace with code generated at machine speed.
“Functional applications can now be built faster than humans can properly evaluate them. Vulnerable systems now reach production at unprecedented speed, and proper code review simply cannot scale to match the new output velocity,” said Eyal Paz, VP of Research at OX Security.
The report recommends embedding security knowledge directly into AI workflows. In practice, that means adding organizational “security instruction sets” to prompts, enforcing architectural constraints, and integrating automated guardrails into development environments. Reactive scanning and post-deployment detection will not be enough when code can be rewritten and redeployed in minutes.
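In practice, a “security instruction set” could be as simple as standing organizational constraints prepended to every prompt before it reaches the coding assistant. The sketch below is hypothetical; the rules, names, and format are invented for illustration and are not prescribed by the report.

```python
# Hypothetical sketch: wrapping every developer request with the organization's
# standing security constraints before it is sent to a coding assistant.
SECURITY_INSTRUCTIONS = """\
- Never build custom authentication; use the approved identity provider.
- All external input must be validated; database queries must be parameterized.
- New services must not be exposed to the internet without an explicit review.
- Secrets come from the secrets manager, never from source code or prompts.
"""

def build_prompt(task_description: str) -> str:
    """Combine the standing constraints with a specific development task."""
    return f"{SECURITY_INSTRUCTIONS}\n\nTask:\n{task_description}"

if __name__ == "__main__":
    print(build_prompt("Add an endpoint that lets users download their invoices."))
```

The same constraints can be enforced after the fact by automated guardrails in the development environment, so that a generated change violating them never reaches production in the first place.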
The strategic shift for security leaders
AI will continue to speed up development. Human teams must therefore shift focus toward architecture, orchestration, and threat modeling.
Security leaders should expect their environments to resemble an “army of juniors.” AI agents can produce large volumes of functional code but need senior oversight to ensure that what works is also secure. Without that guidance, organizations risk filling production with fragile systems that expand attack surfaces.
Developers need policies on when and how to use AI tools, what review steps are mandatory, and how security checks fit into automated workflows. Training should emphasize prompt design, contextual awareness, and architectural thinking rather than syntax or debugging alone.