AI has become part of everyday software development, shaping how code is written and how fast products reach users. A new report from Cycode, The 2026 State of Product Security for the AI Era, explores how deeply AI now runs through development pipelines and how security teams are trying to manage the risks that come with it.
Cycode surveyed 400 CISOs, AppSec leaders, and DevSecOps managers across the US and UK. Every organization said they have AI-generated code in their environment, and almost all are already using or testing AI coding assistants.

AI-generated code becomes the biggest blind spot
Ninety-seven percent of organizations are already using or piloting AI coding assistants, and all respondents have AI-generated code in production. Yet only 19 percent have complete visibility into where and how AI is used.
Most security leaders say their overall risk has increased since adopting AI. Mid-sized companies are leading the way in adoption, often because they rely on AI tools to extend smaller teams. One in three organizations say AI now produces most of their code, and a small portion report that more than three-quarters of their codebase comes from AI.
AI agents introduce code that may contain logic flaws or insecure patterns, and those flaws can multiply quickly at the pace agents generate code. Product security teams are making oversight of AI-generated code a top priority for 2026.
Shadow AI expands the attack surface
The report identifies shadow AI as one of the most pressing security risks. Employees are using unapproved AI tools, plugins, and context protocols without formal oversight. These systems can process sensitive data but often bypass security reviews and procurement controls.
More than half of respondents cite AI tool usage and software supply chain exposure as major blind spots. Each model or integration acts like a new supplier with unknown origins. Without visibility into where code or data comes from, organizations lose confidence in the integrity of their products.
Researchers note that this is, at its core, a supply chain problem: securing the code itself is not enough unless organizations also manage the systems and data pipelines that generate it.
Visibility and governance struggle to keep up
Only 19 percent of organizations report complete visibility into AI use across their development pipelines. More than half lack centralized governance, relying instead on informal or fragmented approval processes. This leaves gaps in oversight and accountability.
Product security teams are starting to take on governance and compliance roles to close those gaps. Over half now manage regulatory responsibilities, and some are introducing AI bills of materials to document models, datasets, and dependencies. This builds on the software bill of materials concept but focuses on transparency for AI components.
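
To illustrate the idea, an AI bill of materials can be thought of as a machine-readable manifest stored and reviewed alongside an SBOM. The Python sketch below is hypothetical: the field names and component types are illustrative, not taken from the Cycode report, and real implementations would more likely follow an emerging standard such as CycloneDX's ML-BOM profile.

import json

# Hypothetical AI-BOM manifest: field names are illustrative, not a standard.
# The structure mirrors the SBOM concept, but the components it inventories
# are models, datasets, and the dependencies that sit behind them.
ai_bom = {
    "bomFormat": "AI-BOM",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "code-assistant-model",   # the model behind a coding assistant
            "supplier": "example-vendor",     # treated like any other supplier
            "version": "2026-01",
        },
        {
            "type": "dataset",
            "name": "internal-fine-tuning-set",              # data the model was tuned on
            "origin": "s3://example-bucket/training-data",   # provenance for audits
        },
        {
            "type": "dependency",
            "name": "inference-runtime",      # runtime library serving the model
            "version": "1.4.2",
        },
    ],
}

# Emit the manifest so it can be versioned and reviewed like any other artifact.
print(json.dumps(ai_bom, indent=2))

Recording supplier and origin fields for each model and dataset is what lets a security team answer the "new supplier with unknown origins" question the report raises: every AI component gets the same provenance scrutiny as a third-party package.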
The research suggests that without stronger governance, inconsistent and duplicated controls will persist, recreating the same weaknesses that once enabled major supply chain breaches.
Productivity rises, but so does risk
AI tools are delivering measurable gains. Most organizations report higher developer productivity, and 72 percent say time-to-market has improved. But 65 percent also report increased risk.
Business leaders want to move quickly to capture value from AI, even when security controls are not in place. For many teams, the trade-off between speed and safety still favors innovation. The question for CISOs is how long that balance can hold as vulnerabilities grow along with productivity.
Convergence emerges as the preferred path
After years of adding more tools, security leaders are now looking to consolidate. Ninety-seven percent of surveyed organizations plan to merge or simplify their application security stacks within a year. Nearly half of product security teams measure success by how much they reduce tool sprawl.
Researchers say convergence, not just consolidation, is the next step. Combining application security testing, supply chain security, and application security posture management into a single framework allows teams to see and prioritize risk. This unified approach helps align speed with control.

