ComputerWeekly

AI agents are here. Are we ready for the security implications?


We’re living through a genuinely groundbreaking moment in technology. Every week brings new breakthroughs in AI agents – capabilities that seemed impossible just months ago are now becoming reality. Organisations are rushing to adopt them, and they’re right to.

But there are important security considerations beneath the enthusiasm. According to our research at Okta, 91% of organisations are now adopting AI agents, yet only 10% have governance strategies in place. Closing this gap will require intentional focus and effort.

The reason comes down to something more fundamental than most people realise. We’re shifting to a fundamentally different architectural model, and we haven’t fully reckoned with what that means for security.

When applications stop following the script

For decades, we’ve built applications that operate within predictable boundaries. Think of a travel booking application. You navigate defined screens and execute a transaction. What’s possible is finite. Security works because users move through guarded corridors deep inside the application’s logic.

But AI agents operate differently. They’re conversational. They accept natural language input from anywhere and make autonomous decisions we can’t entirely predict. The access point isn’t buried in application code anymore. It’s right there at the front end, in the conversation itself.

This is an architectural shift, and it means the security controls we’ve relied on are now being tested in ways we’re only beginning to understand.

Security at the frontline

This shift exposes internal APIs and data surfaces in ways traditional applications never did. When you compromise a deterministic application, damage is typically contained. But when you compromise an AI agent, you’re looking at potential access across your entire infrastructure and actions that ripple in unpredictable ways.

What used to be hypothetical is now happening, and the complexity compounds when agents work together. We’re moving beyond single agents to agent-to-agent communications. That introduces permission and identity challenges we’ve genuinely never had to think about before.

Rethinking identity in an AI-driven world

Compromised identities or credentials are involved in 80% of breaches today, and identity remains a key attack surface for threat actors. But solving this in an agent-driven world requires thinking about identity differently.

For developers and organisations deploying agents, four identity requirements have become non-negotiable:

  • First, genuine agent and user authentication. You must securely link each agent’s actions back to the human user who authorised them.
  • Second, standardised, secure API access. Agents connect to dozens of applications. Those connections need hardening against token leakage and credential compromise.
  • Third, human validation in the loop for anything high-risk or sensitive. This isn’t about lack of faith in AI; it’s about maintaining human agency while these systems mature.
  • Fourth, fine-grained permissions. An agent should access only the data it needs, only for the time it needs it, with every action logged and auditable.
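The four requirements above can be sketched in a few lines of code. This is a minimal, illustrative model only — the names (`ScopedToken`, `issue_token`, `perform_action`) are hypothetical, and a real deployment would use an identity provider and standards such as OAuth token exchange rather than an in-process check.

```python
import time
import uuid
from dataclasses import dataclass


@dataclass
class ScopedToken:
    agent_id: str
    user_id: str        # the human who authorised the agent (requirement 1)
    scopes: frozenset   # least-privilege scopes (requirement 4)
    expires_at: float   # short-lived credential (requirements 2 and 4)

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at


# Actions that always require a human in the loop (requirement 3).
HIGH_RISK_SCOPES = {"payments:write", "records:delete"}

audit_log = []  # every action captured and auditable (requirement 4)


def issue_token(agent_id, user_id, scopes, ttl_seconds=300):
    """Mint a short-lived token tying an agent's actions to a user."""
    return ScopedToken(agent_id, user_id, frozenset(scopes),
                       time.time() + ttl_seconds)


def perform_action(token, scope, human_approved=False):
    """Check scope, expiry, and human approval; log the outcome either way."""
    entry = {"id": str(uuid.uuid4()), "agent": token.agent_id,
             "user": token.user_id, "scope": scope, "allowed": False}
    if not token.allows(scope):
        entry["reason"] = "scope missing or token expired"
    elif scope in HIGH_RISK_SCOPES and not human_approved:
        entry["reason"] = "human validation required"
    else:
        entry["allowed"] = True
    audit_log.append(entry)
    return entry["allowed"]


token = issue_token("travel-agent-7", "alice@example.com",
                    {"flights:read", "payments:write"})
assert perform_action(token, "flights:read") is True
assert perform_action(token, "payments:write") is False   # needs human approval
assert perform_action(token, "payments:write", human_approved=True) is True
assert perform_action(token, "records:delete") is False   # scope never granted
assert len(audit_log) == 4                                # denials are logged too
```

Note that denied actions are logged alongside permitted ones: an agent probing beyond its grant is exactly the signal an audit trail exists to capture.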

Learning from past mistakes

I’ve watched this pattern before with cloud, APIs, and microservices. Security considerations often arrive later in the development of new architectural models, not earlier.

We’re seeing it again with agent protocols. MCP, agent-to-agent frameworks, and cross-app access standards are developing rapidly with genuine effort to embed security from the start. But security still feels like it’s catching up rather than leading design.

The practical reality is that you can’t wait for perfect standards. You need to implement governance with available frameworks today, while remaining flexible to adapt as standards mature.

What leaders must do now

Business leaders face real pressure to unlock AI’s potential and genuine concerns about security. These aren’t mutually exclusive. Here’s what needs to happen.

  • Establish complete visibility into every agent running in your environment and what it’s doing. No shadow agents. No hidden permissions.
  • Apply identity and permission strategies with the same rigour you’d use for human users.
  • Ensure agents connect through secure, auditable channels. Whether building customer agents or using MCP servers, the same principles apply.
  • Finally, log everything. Agent activity will operate at a scale that might surprise you, but if every action is captured, you’ll meet regulatory requirements and investigate incidents quickly.
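The last point — log everything, in a form you can query later — can be sketched as an append-only, structured audit log. This is a toy illustration under stated assumptions: the `AuditLog` class and its field names are hypothetical, and production systems would write to tamper-evident, centralised storage rather than an in-memory list.

```python
import json
import time


class AuditLog:
    """Append-only, structured record of agent activity (illustrative only)."""

    def __init__(self):
        self.records = []

    def record(self, agent_id, user_id, action, target, outcome):
        # One JSON line per action: machine-readable, greppable, exportable.
        entry = {
            "ts": time.time(),
            "agent_id": agent_id,
            "on_behalf_of": user_id,  # every action traceable to a human
            "action": action,
            "target": target,
            "outcome": outcome,
        }
        self.records.append(json.dumps(entry, sort_keys=True))
        return entry

    def search(self, **filters):
        """Filter records by field — the basis of incident investigation."""
        matches = []
        for line in self.records:
            rec = json.loads(line)
            if all(rec.get(k) == v for k, v in filters.items()):
                matches.append(rec)
        return matches


log = AuditLog()
log.record("support-agent-1", "bob@example.com", "read", "crm:case/182", "ok")
log.record("support-agent-1", "bob@example.com", "update", "crm:case/182", "denied")

denied = log.search(outcome="denied")
assert len(denied) == 1 and denied[0]["action"] == "update"
```

The design choice that matters is structure: free-text logs at agent scale are effectively write-only, whereas structured records keep both regulators and incident responders able to answer "which agent did what, for whom, and when".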

Be proactive, not reactive

Breaches linked to agents are happening now and will continue to happen. That’s not a reason to slow AI adoption – it’s a reason to be serious about security from the start.

The encouraging part is that the foundational principles we’ve relied on – identity governance, least-privilege access, encryption, comprehensive auditing – still work. In fact, they’re more important than ever. We just need to scale them intelligently for this non-deterministic world.

The technology exists and the frameworks are emerging. What matters now is whether we approach this thoughtfully or spend the next couple of years managing preventable incidents.

I’m betting we’re smarter than that.

Shiv Ramji is President of Auth0 at Okta
