AI is here, security still isn’t

Although 79% of organizations are already running AI in production, only 6% have put in place a comprehensive security strategy designed specifically for AI. As a result, most enterprises remain exposed to threats they are not yet prepared to detect or respond to, according to the SandboxAQ AI Security Benchmark Report.

AI risks raise alarm among security leaders

The report, based on a survey of 102 senior security leaders across the US and EU, underscores widespread concern about the risks posed by AI, from model manipulation and data leakage to adversarial attacks and the exploitation of non-human identities (NHIs).

Despite growing unease among CISOs, just 28% of organizations have carried out a full security assessment focused on AI. Most still depend on traditional, rule-based tools that were never built to protect machine-driven systems.

Key findings reveal major gaps in AI security readiness

  • Only 6% of organizations have implemented AI-native security protections across both IT and AI systems.
  • 74% of security leaders are highly concerned about AI-enhanced cyberattacks, and 69% are highly concerned about AI uncovering new vulnerabilities in their environments.
  • Just 10% of companies have a dedicated AI security team; in most organizations, responsibility falls to traditional IT or security teams.

“Most organizations aren’t measuring AI security in any meaningful way because the foundations just aren’t there yet,” Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ, told Help Net Security. “In the report, fewer than 30% of security leaders said they’ve assessed the risk of their AI deployments. Only 10% said they had a dedicated AI security team, and just 6% said they’ve implemented any kind of AI security controls. That tells me there’s still a fundamental and critical gap between AI adoption and security readiness.”

“What we’re starting to see from more mature teams is a shift away from trying to retrofit legacy controls. Instead, these teams are taking first steps toward evaluating risks that actually reflect how AI systems behave in production and the blast radius such systems can have if breached or gone rogue,” Manzano continued. “That includes implementing observability and monitoring capabilities for non-human identities and cryptographic assets leveraged by AI workflows.”

He also warned of a troubling trend: “In practice, what we are seeing is that some large organizations are actually downgrading the data security policies they have put in place over the past decade so that they can enable AI use cases that require large amounts of information to function. This phenomenon is not isolated and it deeply troubles me. We cybersecurity professionals have a responsibility and need to step up and build cybersecurity solutions that can keep pace with fast AI adoption. This is just getting started and I believe we are still on time to catch up, but we don’t have time to lose.”

Non-human identities pose new governance challenges

The growing presence of non-human identities, such as autonomous AI agents, services, and machine accounts, is adding a new layer of complexity to the security landscape. These entities often operate without human oversight, using cryptographic credentials to access sensitive resources and interact with other systems. However, most security teams have limited visibility into their actions and little control over their behavior. This lack of oversight weakens core zero trust principles and exposes critical gaps in identity governance and cryptographic hygiene.

Budgets and priorities shift toward AI protection

Even with these security gaps, investment in AI protection is gaining momentum. Eighty-five percent of organizations plan to boost their AI security budgets over the next 12 to 24 months, and one in four expect to make substantial increases. Key priorities include safeguarding training data and inference pipelines, securing non-human identities, and implementing automated incident response tools designed for AI-driven environments.
