When AI writes code, humans clean up the mess

AI coding tools are reshaping how software is written, tested, and secured. They promise speed, but that speed comes with a price. A new report from Aikido Security shows that most organizations now use AI to write production code, and many have seen new vulnerabilities appear because of it.

The study surveyed 450 professionals across the US and Europe, including developers, application security engineers, and security leaders. The results show that AI is moving fast inside software teams, but the security guardrails have not caught up.

AI code is now part of the stack

About a quarter of production code is written with AI tools, with higher usage in the US than in Europe. Most organizations have found flaws tied to this code, and some have seen those issues lead to incidents.

Even so, optimism about AI remains high. Nearly all respondents believe AI will one day write secure code, but few think it can do so without human oversight. The gap between expectation and reality is wide, and security teams are caught in the middle.

Concerns about AI-driven vulnerabilities are widespread. Leaders said the risks only felt real after their first AI-related incident. The technology saves time but can introduce subtle errors that surface months later.

When AI-generated code creates a problem, responsibility is hard to pin down. Over half of respondents said they would blame the security team, while many pointed to the developers who produced or merged the code.

That uncertainty is growing as AI takes on tasks once handled by people. Developers rely on these tools to move faster, yet the fallout from mistakes still lands with humans. Security leaders now have to manage that overlap, balancing automation with ownership.

“No one knows who’s accountable when AI-generated code causes a breach,” said Mike Wilkes, CISO at Aikido Security. “Developers didn’t write the code, infosec didn’t get to review it and legal is unable to determine liability should something go wrong. It’s a real nightmare of risk.”

Tool stacks keep expanding, and so does risk

The report found that teams using a larger stack of security tools often experience more incidents. Organizations that suffered breaches tended to run broader sets of vendor products. Each addition brought extra alerts, integrations, and delays in response.

Engineers spend hours each week triaging alerts. False positives drain time and lead some teams to delay fixes or ignore warnings, which increases risk over time. Tool sprawl also carries financial costs, with lost productivity from chasing bad alerts adding up quickly in large companies.

Organizations still separate their application and cloud security tools, creating gaps that raise the likelihood of incidents. Nearly all who operate disconnected stacks report duplicate alerts or missing data.

Bringing these functions together makes work smoother and helps teams respond faster. It also gives them a better view of where weaknesses sit in their code.

Developers as the first line of defense

Security outcomes often depend on the tools available to developers. Teams using products built for both developers and security staff report fewer incidents and faster remediation. When tools serve both groups, communication improves and fixes happen sooner.

The people side of security is another pressure point. Teams rely on a few engineers who hold essential knowledge. Losing even one can leave major gaps. That makes documentation, training, and retention as important as automation.

Europe prevents, the US reacts

European organizations report fewer serious incidents but a higher number of near misses, suggesting they catch problems earlier. Analysts link this to stronger regulation and more cautious development practices.

US teams move faster but accept greater risk. They rely heavily on AI-generated code and often manage fragmented toolsets. They also tend to postpone fixes more frequently than European peers. The speed advantage can easily turn into new forms of exposure.



