This is a predictions blog. We know, we know; everyone does them, and they can get a bit same-y. Chances are, you’re already bored with reading them. So, we’ve decided to do things a little bit differently this year.
Instead of bombarding you with just our own predictions, we’ve decided to cast the net far and wide. We’ve spoken to cybersecurity experts from around the world to answer what’s, for us, the most pressing question of all:
What emerging security challenge do you believe organizations will underestimate most in 2026 as AI-driven systems and autonomous agents become increasingly interconnected?
Cybersecurity predictions are a dime a dozen. That’s why we’ve curated a list that will actually help organizations stay secure in the year to come.
Autonomous Agents are the New Insider Threat
Jane Frankland, a renowned cybersecurity author, speaker, and entrepreneur, highlights that many organisations will underestimate how autonomous agents will become machine-to-machine supply-chain risks that make critical decisions without human visibility.
However, she also points to what she sees as a greater threat: how these same agents will enable personalised, persistent behavioural steering inside the organization.
“Instead of phishing, attackers will use AI agents with near-perfect psychological profiling to manipulate workflows, shape internal narratives, and subtly redirect human decisions, creating governance failures that traditional security controls were never designed to detect,” she said.
Panagiotis Soulos, Deputy CISO for Viohalco Companies, echoes this sentiment, describing how compromised AI agents can act as “potent insider threats.”
“These systems gain privileged access across interconnected environments but lack adequate governance and monitoring. The rapid increase of enterprises scaling agents, without adequate protections, and the huge number of these agents introduce a new attack vector that, if compromised, can execute attacks at machine speed, like data exfiltration and privilege escalation without human oversight,” he said.
Failures Will Cascade Throughout Agent Ecosystems
As AI systems begin chaining tasks across multiple models, tools, and services, the risk surface expands dramatically. However, according to Andrew Storms, VP of Security at Replicated, many organizations still think in terms of isolated components.
“Organizations will critically underestimate adversarial cascades in autonomous AI agent ecosystems,“ he said. A single compromised instruction can ripple outward, “with each step amplifying or redirecting the malicious intent in ways that existing security controls and accountability structures aren’t designed to detect, contain, or attribute to a responsible party.”
For Andrew, the real danger isn’t a flaw in a single model or tool. It’s the emergent attack surface created when autonomous agents interact across trust boundaries with minimal human oversight.
Jeremy Dodson, Founder of Piqued Solutions, takes this idea even further.
Jeremy argues that organizations should be concerned about what happens when a compromised agent quietly persuades dozens of other agents, each with real privileges, to act on its behalf. The real damage comes from the agents that “behaved correctly” while following poisoned instructions. That’s the mindset gap most orgs will carry into 2026.
“Finding orgs that blindly trust MCP, or connected (anything really), have been how we have been able to poison instruction during our testing. If you think about it, old techniques (man-in-the-middle), ‘new’ technology,” he said.
AI Will Make Us Complacent and Overly Trusting
For Desiree Michelle, Head Information Security Architect at Quantum Mergers, AI agents themselves aren’t the threat; it’s how we use them.
“When advancing technologies make us overly trusting, and we forget the real threats, our reliance on tools becomes too heavy; we lose the vigilance that keeps us resilient. Proactive security must remain constant and work in conjunction with these systems,” she said.
That excessive trust can also erode the quality of security decisions.
Kathleen Moriarty, Founder of SecurityBias, sees the same pattern from a different angle. She notes that while AI accelerates work, it often removes the nuance, interpretation, and contextual insight that security teams rely on. “There is no joy in AI generated responses,” she said, pointing out that they lack the synthesis and deeper understanding humans bring when evaluating threats.
Kathleen foresees a correction coming. She argues that organizations will realize that certain tasks – especially those requiring interpretation, behavioral context, or analytical depth – can’t be automated without weakening outcomes.
Fundamentals Will Still Decide Security Outcomes
Bob Clinton, Field CTO at Driven.Tech, reminds us that regardless of how sophisticated AI becomes, cybersecurity fundamentals still matter:
“Volatility and velocity of change will challenge organizations that aren’t brilliant in the basics. Fundamental cybersecurity functions like visibility, governance, and identity management are where defenders will win or lose in 2026,” he said.
AI Will Accelerate Basic Infrastructure Weaknesses
Our very own VP of Product, Tim Erlin, warns that while AI brings exciting new capabilities, organizations often overlook the foundations on which they run.
“The reality is that generative AI and AI agents sit atop a mountain of older infrastructure. Just as we’ve seen the outages at AWS and Cloudflare traced back to things like DNS and unmanaged file size, the security challenges organizations face can often be attributed to well understood vulnerabilities like unauthenticated APIs and SQL injection,” he said.
“When you combine the usefulness of AI with the success of less sophisticated exploits, organizations are going to underestimate how AI-assisted attackers will step up their basic attacks.”
Organizations Will Underestimate AI-Powered Defenses
Finally, Thomas Garcia, Cybersecurity Manager, stresses that AI and autonomous systems aren’t the enemy. The biggest threat of an AI-driven attack is the absence of AI in organizations’ defenses.
“Using intelligent security measures, such as an AI-integrated Zero Trust that adjusts with real-time behavioral data will help shore up traditional security framework,” he said.
Thomas argues that authentication remains the most significant threat to API security. ”Just because I give a key to my home to someone doesn‘t mean my house is secure as long as they lock the door behind them. I need a system to analyze why they are there, what they are doing and restrict or lock out areas completely when there’s a threat,” he said.
Looking Ahead: Preparing for 2026
Agentic AI presents one of the biggest challenges for organizations in 2026. The coming year will reward the organizations that combine vigilance, governance, and intelligent adoption of AI.
While preparing for the future is important, it is equally essential to understand the past. For Wallarm, 2025 has been a huge year. It was the year we turned insight into action and visibility into measurable business impact. To see how we did that, check out our 2025 year in review blog.
