Most organizations view AI identities through the same lens used for other non-human identities, such as service accounts, API keys, and chatbots, according to The State of Non-Human Identity and AI Security report by the Cloud Security Alliance.
AI identities inherit old IAM weaknesses
Treating AI identities as another category of non-human identity means they inherit the same weaknesses that have affected identity programs for years. Credential sprawl, unclear ownership, and uneven lifecycle controls already pose challenges at scale. AI systems increase the number of identities in circulation and shorten the time between creation and use, placing additional stress on these controls.
Many identity programs rely on models built for slower and more predictable systems. AI identities are created programmatically, distributed across environments, and used continuously, increasing the number of credentials that require tracking and review.
Risk management often centers on access mechanisms, with limited visibility into how AI systems behave once access is granted.
Policy doesn’t keep up with automation
In many organizations, AI identities fall into a gray area. Defined rules for how they are created, managed, and retired are often missing, and teams handle them differently depending on the system or use case.
Automation provides limited relief. AI identities are still created and removed through processes that include manual steps, making consistency difficult to maintain as AI systems generate new access on a regular basis. No single team consistently owns an AI identity throughout its lifecycle, and permissions tend to accumulate over time.
When an issue occurs or an alert triggers, security teams may spend valuable time determining ownership before they can act. The result is a growing set of identities with broad access and limited oversight, which becomes increasingly difficult to manage as AI systems expand across the environment.
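One way teams attack the ownership gap is to refuse to mint any machine identity without an accountable owner and an expiry, so orphaned credentials surface on their own. The sketch below illustrates the idea with a hypothetical in-house registry; the record fields, the 30-day default lifetime, and the sweep function are illustrative assumptions, not practices described in the CSA report.

```python
# Minimal sketch of an identity registry that enforces ownership and expiry.
# All names here (IdentityRecord, create_identity, sweep) are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class IdentityRecord:
    name: str                    # e.g. "inference-agent-42"
    owner: str                   # accountable team, required at creation
    scopes: list[str]            # least-privilege grants, reviewed on renewal
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl: timedelta = timedelta(days=30)   # short default lifetime (assumption)

    @property
    def expired(self) -> bool:
        return datetime.now(timezone.utc) > self.created + self.ttl

registry: list[IdentityRecord] = []

def create_identity(name: str, owner: str, scopes: list[str]) -> IdentityRecord:
    """Refuse to mint an identity without an owner; this is the policy hook."""
    if not owner:
        raise ValueError("every AI identity needs an accountable owner")
    rec = IdentityRecord(name, owner, scopes)
    registry.append(rec)
    return rec

def sweep() -> list[IdentityRecord]:
    """Return expired identities so retirement can be automated, not ticketed."""
    return [rec for rec in registry if rec.expired]
```

The design choice worth noting is that ownership is enforced at creation rather than reconstructed during an incident, which is precisely when the report says teams lose time.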
“Organizations with limited visibility and unclear ownership are feeling the strain of AI-driven identities and securing identities in the AI era. Establishing strong identity foundations now is critical to reducing risk and confidently scaling AI use,” said Hillary Baron, AVP of Research, Cloud Security Alliance.
Legacy IAM meets continuous identity creation
Most identity and access tools were built for human users and long-lived service accounts. They struggle to scale as AI systems create and use identities continuously.
Security teams report limited confidence in their ability to control non-human identities at scale. Legacy IAM platforms depend on manual reviews, exception handling, and ticket-based workflows, which slow oversight and leave many AI-generated identities outside established governance paths.
Non-human identities tied to AI workloads are often treated as exceptions. They bypass access reviews and certification cycles, reducing visibility into where credentials exist and what resources they can reach.
This gap between AI-driven activity and identity controls forces teams into a reactive posture, addressing risk only after access has already been granted.
The blind spots around AI credentials
Weaknesses in legacy IAM tools and governance are most visible in how organizations manage the credentials behind AI systems. Teams often lack a reliable way to detect when new AI-related identities or tokens are created, allowing credentials from short-term projects or experiments to persist.
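As one illustration of closing that detection gap, the sketch below polls AWS IAM for access keys created within the last day. AWS is only an example environment, and the one-day lookback window is an assumption; the report does not prescribe a specific tool or interval.

```python
# Sketch: flag recently created IAM access keys so new machine credentials
# don't appear silently. AWS is one example environment; the lookback
# window is an assumption for illustration.
from datetime import datetime, timedelta, timezone

import boto3  # pip install boto3

LOOKBACK = timedelta(days=1)

def recent_access_keys() -> list[tuple[str, str]]:
    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - LOOKBACK
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                if key["CreateDate"] >= cutoff:
                    findings.append((user["UserName"], key["AccessKeyId"]))
    return findings

if __name__ == "__main__":
    for user, key_id in recent_access_keys():
        print(f"new credential: {key_id} on {user} -- confirm owner and purpose")
```

Run on a schedule, a check like this turns credential creation from an invisible event into a reviewable one, which is the visibility the report finds missing.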
When a credential is exposed or no longer needed, rotation or revocation frequently lags. Security teams may spend hours or days identifying where a token is used, who owns it, and which systems depend on it. During that time, the credential remains active.
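A hedged sketch of what faster revocation can look like: deactivate the exposed key first, then mint a replacement, so the exposure window closes before the dependency hunt begins. Again, AWS IAM is only an example; the function and its ordering are illustrative assumptions rather than a prescribed procedure.

```python
# Sketch: revoke-then-replace for an exposed IAM access key.
# Deactivating first closes the exposure window even before dependent
# systems have been identified. Illustrative only.
import boto3

def rotate_exposed_key(user_name: str, exposed_key_id: str) -> str:
    iam = boto3.client("iam")
    # Step 1: disable the exposed credential immediately. "Inactive" is
    # reversible, so a false alarm doesn't cause an outage.
    iam.update_access_key(UserName=user_name,
                          AccessKeyId=exposed_key_id,
                          Status="Inactive")
    # Step 2: issue a replacement for dependent systems to migrate to.
    # (IAM caps users at two access keys, so the old key should be
    # deleted once migration is confirmed.)
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
    return new_key["AccessKeyId"]
```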
Reviewing, rotating, and auditing non-human identities consumes a consistent share of staff time each month, further straining security operations.
