There’s a pernicious cycle in cybersecurity that has repeated for decades. Products are released before they are properly secured — security-by-design principles are skipped — leaving security teams to manage the consequences. The general attitude is “We’ll fix it with a patch,” or “It will get fixed in the next release.” Despite the obvious failings of this approach, the practice continues and is getting worse. The 2025 Verizon Data Breach Investigations Report (DBIR) found that in the past year, breaches that started from exploited vulnerabilities grew 34%, and more than half of edge device vulnerabilities remained unremediated a full year later.
Now, the same pattern is occurring with artificial intelligence. AI systems are being rushed through development cycles and released with known limitations and inadequate safeguards. IBM’s Cost of a Data Breach report found that 97% of organizations that experienced an AI security incident lacked proper AI access controls. But even so, much of the vendor industry is arguing strenuously against guardrails and standards, insisting that safety requirements will constrain development and slow progress.
The risks of prioritizing speed and marketing position are manifesting in ways that make the old “penetrate and patch” cycle look almost manageable by comparison. AI is less well understood than most of the disruptive technologies the security profession has dealt with before. It is evolving faster than defenses can adapt, and it is being embedded into critical systems before anyone has fully evaluated what could go wrong.
AI agents are the latest capability being rapidly deployed across the industry, and they pose a threat that existing security architectures were never built to address. They are entering internal development processes and supply chains before they are fully understood, and unlike chatbots, they can create, delete, and modify files without human intervention. Their mere presence introduces a new class of threat: autonomous actors with write access inside the perimeter. The 2025 Verizon DBIR found that third-party involvement in breaches doubled year over year, from 15% to 30%. As AI agents become another category of third-party dependency, that exposure is certain to grow.
At the same time, organizations are firing trained security staff and replacing them with AI tools or with personnel who lack the domain-specific security expertise to evaluate whether AI-generated outputs are actually safe. The people being replaced usually understood the specific business context and threat landscape in which they were operating and adjusted accordingly. Losing that talent and experience creates security risk in its own right: AI has neither the institutional knowledge those staff members carried nor, more precisely, any actual domain expertise at all. Companies are quick to jettison that kind of talent, but it takes a long time to rebuild, and all the while those organizations will be accumulating technical debt.
Any one of these risks should give security leaders reason to reconsider the pace of adoption, but it’s also worth examining the assumption that regulation will slow development. Other technology-heavy industries have managed to regulate without diminishing what they accomplish. For instance, the international community established standards around genetic research, including rigorous containment protocols for gain-of-function research. Those standards have not prevented enormous progress in genomics and biotechnology; CRISPR-based therapeutics, for example, are now in clinical use. They have, however, prevented foreseeable harms. Similar frameworks govern nuclear energy, commercial avionics, and spacecraft development. In each case, the question was never whether to advance, but how to advance without creating damage that would be difficult or impossible to undo.
Securing these systems starts with requiring security-by-design and safety-by-design for any AI tool entering an environment, with testable, verifiable evidence that these principles were built in from the start. Vendor assurances are not sufficient, both because many vendors lack the in-house capability to evaluate their own security posture and because they have no real incentive to be candid. Security leaders, as customers, should demand the same things they would from any other critical system: test results, audit trails, and documented security considerations.
Experienced human professionals must remain in the verification process because AI systems can hallucinate compliance audits as readily as they hallucinate secure code and references to standards. An AI agent told to build with security by design may report that it has done so, whether or not that is true; sycophantic behavior by AI systems is well-documented.
For the institutions already deep into AI projects, it’s worth auditing what’s actually being used across the enterprise, because it is likely that not all of it is sanctioned. UpGuard’s State of Shadow AI report found that 81% of the general workforce and 88% of security professionals use unapproved AI tools at work. That increases both the risk and the cost of something going wrong: organizations with unsanctioned AI tools pay materially more when breaches occur.
But none of these steps matter if security leaders aren’t willing to push back against hype-driven timelines and make the case to their leadership that responsible adoption isn’t slow adoption. The ACM Code of Ethics and Professional Conduct makes the duty explicit: professionals must anticipate and avoid harm. Security leaders can and should invoke that standard when urging their boards and executives to take a carefully considered approach to AI introduction. They have the expertise to anticipate where the sharp edges of this technology are, and that expertise carries an obligation.
The choices made in the next few years will determine whether AI is built on a foundation that holds or on one that must be torn up and rebuilt at enormous cost. Organizations that invest in safeguards now are likely to produce more stable, trustworthy systems and earn greater long-term customer confidence. Those that do not may find themselves locked into infrastructure they cannot easily fix, defending systems they do not fully understand, with few or no experienced employees left to help them sort it out.
The competitive argument for moving fast without safeguards assumes that the cost of caution exceeds the cost of failure. The evidence says otherwise. IBM’s 2025 data breach report put the global average cost of a breach at $4.44 million, and organizations with high levels of shadow AI paid an additional $670,000 on average. Meanwhile, no company has yet demonstrated a lasting market advantage from being first to deploy an AI capability that later had to be retracted, patched, or publicly explained.
The organizations that will win long-term are the ones whose systems hold up under scrutiny from regulators, from customers, and from adversaries. Security leaders who make this case to their boards aren’t arguing for slower adoption. They’re arguing for adoption that doesn’t have to be repeated…or undone. Decisions made in haste are repented at leisure. Being the first to cause a preventable disaster is never a good policy.
About the Author
Eugene H. Spafford is a Distinguished Professor of Computer Science at Purdue University. During his 48-year career in computing — including 39 years as a faculty member at Purdue — Spaf (as he is widely known) has worked on issues in privacy, public policy, law enforcement, software engineering, education, social networks, operating systems, and cybersecurity. He has been involved in the development of fundamental technologies in intrusion detection, incident response, firewalls, integrity management, and forensic investigation. He is a Fellow of the American Academy of Arts and Sciences (AAA&S) and the American Association for the Advancement of Science (AAAS); a Life Fellow of the ACM, the IEEE, and the ISC2; a Distinguished Fellow of the ISSA; and a member of the Cyber Security Hall of Fame — the only person ever to hold all these distinctions.
Spaf can be reached online at linkedin.com/in/spafford.

