Experts unpack the biggest cybersecurity surprises of 2025

2025 has been a busy year for cybersecurity. From unexpected attacks to new tactics by threat groups, a lot has caught experts off guard. We asked cybersecurity leaders to share the biggest surprises they’ve seen so far this year and what those surprises might mean for the rest of us.

Chris Acevedo, Principal Consultant, Optiv

The biggest cybersecurity surprise of 2025 has been the speed and sophistication of AI-powered Business Email Compromise, specifically the pivot away from email alone.

We’ve seen attackers evolve from phishing emails to full-spectrum impersonation: AI-generated voices and even deepfake videos used in live calls or voicemails to impersonate executives. In one case, a client’s finance lead received a Teams voice message (seemingly from the CFO) urgently requesting a funds transfer. The tone, cadence, and verbal tics were eerily accurate. Only a manual call-back to the actual CFO caught the fraud in time.

What makes this so dangerous is how legitimate it sounds and feels. These deepfakes are not just technically impressive, but they exploit the trust and urgency built into human relationships and executive communication.

The lesson for CISOs? Email filtering and MFA are no longer enough. Security leaders need to think in terms of human verification workflows, especially for high-risk transactions. We recommend building “out-of-band” confirmation steps into processes that handle sensitive approvals (voice verification, secondary sign-offs, secure messaging apps).
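The out-of-band confirmation step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the threshold, helper names, and callback interfaces (`send_secure_message`, `wait_for_reply`) are assumptions for the sketch:

```python
import secrets

# Hypothetical out-of-band confirmation gate for high-risk transfers.
# The threshold and the delivery/response callbacks are illustrative
# assumptions, not taken from any specific product.

HIGH_RISK_THRESHOLD = 10_000  # flag transfers at or above this amount

def request_transfer(amount, requester, approver_contact,
                     send_secure_message, wait_for_reply):
    """Approve a transfer only after an out-of-band confirmation."""
    if amount < HIGH_RISK_THRESHOLD:
        return True  # low-risk: no extra verification step
    # Generate a one-time code and deliver it over a channel the
    # attacker does not control (secure app, call-back, etc.).
    code = secrets.token_hex(4)
    send_secure_message(
        approver_contact,
        f"Confirm transfer of {amount} requested by {requester} with code {code}",
    )
    reply = wait_for_reply(approver_contact)
    return secrets.compare_digest(reply.strip(), code)
```

The key design choice is that the confirmation code travels over a second channel and is compared with a constant-time check, so a convincing voice or video on the first channel is not, by itself, enough to move money.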

Training and awareness also need to go beyond phishing simulations. Employees must be prepared to question what they see and hear, even if it looks or sounds exactly like their leadership.

Attackers have always followed where trust lives. In 2025, they’ve learned to speak it.

Iftach Ian Amit, CEO, Gomboc.ai

My biggest surprise in 2025 is that generative AI coding is still perceived as a solution developers and engineers can adopt as-is. I have witnessed multiple enterprise-level attempts to incorporate coding assistants and vibe coding that go beyond simple prototyping or creating a “v1” application, and the number of security holes introduced through them is astonishing.

The lack of a mature practice that complements vibe coding with an “alignment” mechanism that is provably accurate, repeatable, and deterministic is actually holding back the widespread adoption of AI across engineering practices.

Jim Alkove, CEO of Oleria

I’ve been pleasantly surprised by the growing recognition over the last 12 months that identity is a critical foundation both for cybersecurity and business enablement. So, it’s all the more surprising that some of the biggest breaches still stemmed from fundamental identity security gaps even within sophisticated and well-respected organizations.

Case in point: the recent UNC6040 attacks targeting Salesforce customers. Despite widespread MFA adoption, attackers succeeded because organizations still relied on weak authentication methods such as SMS codes and push notifications instead of phishing-resistant FIDO keys.

My advice: Every organization needs an honest identity assessment. You cannot manage access risk you cannot see, and this foundational step enables everything from AI governance to incident response.
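One concrete starting point for such an assessment is auditing which accounts can still sign in without a phishing-resistant factor, the gap UNC6040 exploited. A toy sketch, where the method labels are illustrative assumptions rather than any vendor's API:

```python
# Toy identity-assessment check: flag accounts whose only enrolled MFA
# factors are phishable (SMS codes, push approvals, TOTP). The method
# names below are illustrative, not from any identity provider's API.

PHISHING_RESISTANT = {"fido2", "passkey", "hardware_key"}

def flag_weak_accounts(enrollments):
    """Return users with no phishing-resistant factor enrolled.

    enrollments maps a username to the set of MFA methods on file.
    """
    return sorted(
        user for user, methods in enrollments.items()
        if not PHISHING_RESISTANT & {m.lower() for m in methods}
    )
```

A report like this turns “unseen access risk” into a concrete upgrade list: every flagged user is one SMS-phishing campaign away from an UNC6040-style compromise.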

I’m also surprised that we haven’t seen more major AI-associated breaches. But I’m no less convinced that a wave of serious AI breaches is coming. Three years post-ChatGPT, AI adoption outpaces security controls by a dangerous margin. We’re seeing organizations deploy AI copilots and agents without establishing identity governance around these new vectors, which introduces significant security risks.

Finally, I’m surprised more people aren’t talking about how the oncoming wave of agentic AI will overwhelm our conventional identity frameworks. We’re talking about fully autonomous agents that act in non-deterministic ways that don’t fit any of our current paradigms for machine identities and other non-human identities (NHIs).

Abhay Bhargav, Chief Research Officer, AppSecEngineer

The biggest cybersecurity surprise of 2025, for me, was the publication and almost overnight adoption of the Model Context Protocol (MCP), Anthropic’s bid to give AI agents a universal “USB-C–style” connector for sharing context with any large language model. On paper it looked like the breakthrough layer we’ve long wanted: a drop-in interface so developers could snap together orchestration frameworks, memory stores, and reasoning engines without bespoke adapters. It lived up to that expectation: it is extremely easy to adopt and has become the de facto standard.

In practice, the real shock was how severely the specification and its implementations ignored security. Other than authentication, which only supported OIDC, there seems to be no focus on strong authorization and access control.

Implementations sorely lack security logging, and the specification mandates none.

The isolation model is also weak: task poisoning is easy, enabling a whole class of attacks that can hijack other MCP servers installed on the same system.

In terms of supply chain, there are really no good ways to enforce supply chain security requirements like integrity and provenance. Those controls are completely missing.
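The authorization and audit-logging gaps described above can be patched, imperfectly, at the application layer. A hedged sketch of such a guard follows; the `call_tool` interface, principal model, and log format here are assumptions for illustration, not the MCP SDK's actual API:

```python
import json
import time

# Hypothetical guard layer around MCP-style tool calls, sketching the
# per-principal authorization and append-only audit logging the spec
# does not mandate. Interfaces are assumptions, not the MCP SDK's API.

class ToolGuard:
    def __init__(self, allowed, audit_log):
        self.allowed = allowed      # maps principal -> set of permitted tools
        self.audit_log = audit_log  # append-only list of security events

    def call_tool(self, principal, tool, args, handler):
        """Log every attempt, then dispatch only permitted calls."""
        permitted = tool in self.allowed.get(principal, set())
        self.audit_log.append({
            "ts": time.time(),
            "principal": principal,
            "tool": tool,
            "args": json.dumps(args),
            "allowed": permitted,
        })
        if not permitted:
            raise PermissionError(f"{principal} may not call {tool}")
        return handler(**args)
```

Note that denied attempts are logged before the exception is raised, so the audit trail captures probing behavior, which is exactly the signal you lose when neither authorization nor logging is part of the protocol itself.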

It was disappointing to see the AI community, dazzled by interoperability, rush MCP into critical workflows without first demanding a threat model. Watching well-funded teams in 2025 repeat the “secure it later” pattern that haunted Web 2.0 felt like déjà vu at hyperspeed. It’s a reminder that convenience still trumps security unless practitioners push back early and loudly.
