For the past two decades, cybersecurity has largely been a story about protecting humans from machines blocking malware, filtering phishing emails, companies mitigating DDoS attacks, and patching software vulnerabilities before attackers exploit them. The adversary was clear. The surface was known. The playbook, while imperfect, was at least legible, but that story is now changing.
The next major frontier in cybersecurity is not defending against AI. It is figuring out how to trust it.
The Agent Is Already In the Building
Autonomous AI agents are being deployed today reading inboxes, executing code, transferring funds, signing off on contracts, and making decisions that in any previous era would have required a human signature. The agentic economy is not on the horizon. It is already operating inside your perimeter.
The speed of adoption is understandable, and the productivity case is compelling. A single AI agent can compress weeks of analyst work into hours. But here is the question most organizations are not yet asking: when an AI agent takes an action on your behalf, how do you actually know it is who it claims to be?
A Trust Problem Hiding in Plain Sight
In traditional network security, identity is foundational, and zero-trust architectures exist precisely because we learned that presence inside a network is not proof of legitimacy. We authenticate users, verify devices, enforce least-privilege access controls, log and audit everything. None of that infrastructure was built with AI agents in mind.
Today, when an autonomous AI agent initiates a request to an API, a database, a financial system, or another agent, the receiving party typically has no reliable mechanism to verify its identity, confirm what it is authorized to do, check whether its instructions have been tampered with, or revoke its access in real time. The agent arrives as a stranger, and most systems simply let it in.
This is not a theoretical vulnerability. It is a systematic gap that is widening every month as agent deployments scale. And it is the kind of gap that malicious threat actors have historically been very good at exploiting.
Why Verification Is Harder Than It Looks
The challenge of verifying AI agents is not simply about adding a layer of authentication. It is structurally different from verifying humans or conventional software, for several reasons.
First, agents are dynamic. Unlike a static application with a fixed set of behaviors, an AI agent’s capabilities and actions can shift based on context, instructions, and the models powering them. Verifying that an agent is “safe” at deployment tells you very little about what it might do an hour later.
Second, agents operate in chains. Modern AI workflows involve multi-agent pipelines where one agent delegates tasks to another, which delegates to another still. Each hand-off is a potential point of spoofing, injection, or scope creep. Verifying the first agent in a chain is not enough if you cannot verify what it passes downstream.
Third, agents interact across organizational boundaries. An AI agent operating on behalf of your company may be communicating with agents controlled by your vendors, your customers, or your cloud infrastructure providers. There is no shared trust framework for cross-organizational agent interactions yet.
Fourth, the attack surface includes the instructions themselves. Prompt injection attacks where malicious content embedded in external data hijacks an agent’s behavior are already being used in the wild. Verification is not just about who the agent is. It is about whether what the agent has been told to do has been tampered with.
The Industry Is Starting to Act
The emergence of frameworks and programs designed to address this gap is a signal that the security community understands what is at stake.
Anthropic’s Cyber Verification Program (CVP) is one early indicator of where the industry needs to go. By establishing a framework for verifying legitimate cybersecurity operators working with Claude’s infrastructure, including dual-use tooling and offensive security research, Anthropic is acknowledging something important: security in the AI era requires active verification, not passive assumption.
Lyrie.ai was part of the first group of companies accepted into the CVP, highlighting its early focus on building security tools for AI agents and autonomous systems. The company’s inclusion also reflects a growing view across the industry: securing AI systems requires platforms built for how modern AI actually operates, instead of adapting older security models that were never designed for it.
But the CVP is a starting point, not a destination. What the industry needs is not just verification programs for individual operators. It needs open, interoperable standards that make agent verification a first-class primitive across the entire ecosystem.
What a Standard Might Look Like
A cryptographic standard for AI agent verification would need to address, at a minimum, five questions: Who is this agent? What is it authorized to do? Has it or have its instructions been tampered with? Who delegated authority to it, and through what chain? And can that authority be revoked in real time if something goes wrong?
These are not novel concepts in security. They map closely to what we do for code signing, certificate authorities, and identity federation. The challenge is adapting them to the specific properties of AI agents, their dynamism, their delegation patterns, and their susceptibility to instruction manipulation.
Lyrie’s research team has published the Agent Trust Protocol (ATP), an open cryptographic standard addressing exactly these primitives. It is royalty-free and submitted for consideration by the Internet Engineering Task Force (IETF).
Whether ATP or a competing proposal ultimately becomes the standard matters less than the broader point: the conversation about standards needs to happen now, before the deployment curve makes retrofitting prohibitively difficult.
The history of the internet offers a cautionary tale. Email was built without authentication. Decades later, we are still fighting spam, phishing, and spoofing at scale because trust was an afterthought. We cannot afford to make the same mistake with AI agents.
What Security Teams Should Be Asking Today
Organizations that are already deploying or planning to deploy autonomous AI agents should be asking hard questions of themselves and their vendors:
How are agent identities established and maintained? What access controls govern agent behavior, and how are they enforced at runtime rather than just at configuration time? How are multi-agent delegation chains audited? What happens when an agent behaves unexpectedly, and how quickly can authority be revoked? Is the AI infrastructure your agents operate on subject to independent security verification?
If the answers are vague, that is not a sign of an immature vendor. It may be a sign that the right questions are only now becoming mainstream. The organizations that get ahead of this will have a significant advantage not just in security posture, but in the trust their customers, partners, and regulators place in their AI systems.
The Verification Layer Is the Next Battlefield
Cybersecurity has always evolved to meet new threats. The cloud era produced a new generation of identity and access management. The mobile era produced endpoint management and zero-trust networking. The IoT era is still, frankly, a mess largely because standards and verification came too late.
AI agents are being adopted at a pace few technologies have matched. Companies are rushing to deploy them because the productivity gains are real, while attackers are already looking for ways to exploit weak points in these systems.
What comes next is becoming harder to ignore. Verification and identity checks for AI agents are expected to become a standard part of enterprise security. The bigger concern is whether the industry can put the right standards and safeguards in place before a major incident forces the issue.
Right now, most AI agents interacting online operate without a trusted identity framework. That leaves businesses with limited ways to verify who or what they are communicating with. Fixing that gap is quickly becoming one of the next major security problems the industry has to solve.

