Your IAM was built for humans. AI agents don’t care

Identity and access management was built for a simpler world. One where the hardest problem was a human logging in, and where “Who are you?” was sufficient to decide what someone could do. That model served enterprises well for decades.

It was not built for a world where non-human identities now account for more than 90% of all authentications, where AI agents act across systems, trigger chains of API calls, and make access decisions in milliseconds without a human in the loop. The assumptions that made traditional IAM reliable are exactly what make it poorly suited for what enterprises are deploying today.

What’s needed isn’t a new product bolted onto an existing architecture. It’s a different mental model, one that focuses on the application rather than the user and treats authorization not as a gate passed once but as a continuous process evaluated at every step.

Authentication is a moment. Authorization is a process

The core assumption baked into most IAM platforms is that access is a gate you pass through once. Authenticate, receive a token, proceed. The system trusts that whoever passed through the gate at the start remains the same entity performing the same action at step twelve of a multi-agent workflow. For humans, that assumption is mostly fine. For agents, it is a structural vulnerability.

Most IAM systems are built around a single question: Is this user allowed to perform this action? It’s the right question for a login screen. It’s the wrong question for an API chain where the user isn’t directly present. Technically, users never make requests on their own; it’s always an application making requests on their behalf. When authorization decisions are based solely on user permissions, the system has no way to understand how a request is made or whether it’s expected in that context.

When an AI agent acts on behalf of a user, there are immediately two identities in play: who the agent is, and who the user is. Most IAM systems were built to track one. The agent acts with the user’s permissions, often broad ones provisioned in advance, and the system has no visibility into what the agent is actually doing at each step of the chain. That’s not a configuration problem. It’s an architectural one.

The identity sprawl trap

The instinct in the IAM market has been to solve this by treating agents as a new type of identity. Register the agent. Give it a profile. Manage its lifecycle. Run access reviews. Decommission it when it’s no longer needed. This mirrors how enterprises handle human identities, which is exactly the problem.

AI agents are not employees. An employee joins, works for some years, and leaves. An MCP client connects, executes a task, and disconnects. In a multi-agent system, agents spawn other agents dynamically. Some exist for seconds. Applying directory-based lifecycle management to these entities doesn’t solve the security problem; it creates a new one. More entries to provision. More orphaned identities to clean up. More access reviews on entities that no longer exist. It’s identity sprawl at machine speed.

The question is not who the agent is. The question is what it’s allowed to do right now, in this context, and on whose behalf.

What runtime authorization actually looks like

A more useful starting point is the application, not the user. Applications exist to fulfill a purpose. They interact with APIs, some on behalf of users and others independently. An application-centric model decouples what the application is permitted to do from what the user is permitted to do, letting you define precise rules for how one application may act on a user’s behalf versus another.

This matters enormously for AI. Agents require just-in-time, least-privilege access, scoped to the specific action, the specific data, and the specific moment. The mechanism for conveying that context is the access token. Not as a simple credential that proves you’re allowed in, but as a carrier of context: who is acting, who they represent, what they’re trying to do, and how much they should be trusted right now. APIs need that information to make the right access decision. If the token doesn’t carry it, the API has no basis for enforcement.
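As a concrete sketch of a token that carries context rather than just proving identity: RFC 8693 defines an `act` (actor) claim for exactly this delegation case. The issuer, audience, subject, and scope values below are illustrative assumptions, not a real deployment:

```python
# A hypothetical access-token payload carrying delegation context.
# Claim names follow RFC 7519 (JWT) and RFC 8693 (the "act" claim);
# all values are made up for illustration.
import time

now = int(time.time())
token_payload = {
    "iss": "https://idp.example.com",        # who issued the token
    "sub": "user-1234",                      # the user being represented
    "act": {"sub": "agent-travel-booking"},  # the agent actually acting (RFC 8693)
    "scope": "bookings:create",              # the one action this token permits
    "aud": "https://api.example.com/bookings",
    "iat": now,
    "exp": now + 120,                        # short-lived: two minutes
}

# An API receiving this payload can answer the questions that matter:
# who is acting, on whose behalf, for what, and for how long.
assert token_payload["act"]["sub"] != token_payload["sub"]
```

With the `act` claim present, the API can distinguish "this user, acting directly" from "this agent, acting for this user" and enforce different rules for each.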

An agent should authenticate and receive a token scoped exactly to the action it needs to perform. When the task is complete, that authorization disappears. No directory entry. No lifecycle to manage. No standing access to compromise.
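A minimal sketch of what "that authorization disappears" means at runtime, with hypothetical function and field names (this is not a real Curity API, just the shape of the check):

```python
import time

def issue_task_token(agent_id, user_id, action, ttl_seconds=60):
    """Mint an ephemeral grant scoped to one action. No directory entry
    is created; this dict is the entire lifecycle."""
    return {
        "agent": agent_id,
        "on_behalf_of": user_id,
        "scope": action,
        "expires_at": time.time() + ttl_seconds,
    }

def is_authorized(token, action, at=None):
    """The API's runtime check: right scope AND still within the token's lifetime."""
    at = time.time() if at is None else at
    return token["scope"] == action and at < token["expires_at"]

token = issue_task_token("agent-42", "user-1234", "invoices:read", ttl_seconds=60)
assert is_authorized(token, "invoices:read")           # in scope, in time
assert not is_authorized(token, "invoices:delete")     # wrong action: denied
assert not is_authorized(token, "invoices:read",
                         at=token["expires_at"] + 1)   # task over: access gone
```

Nothing is deprovisioned when the task ends; the grant simply stops being valid.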

The architecture your APIs already speak

None of this requires replacing existing infrastructure. The standards that make this work (OAuth 2.0, token exchange, and dynamic and ephemeral client registration) are the same standards enterprise APIs already implement. The issue isn’t that the right primitives don’t exist. It’s that they’re not being applied at the layer where agents operate, and that existing deployments are rarely configured with enough dynamism to handle it.

The good news is that if your APIs already speak OAuth, you don’t need to wait long to get this right. The primitives are already there: token exchange, dynamic client registration, and short-lived scoped tokens. You’re not rebuilding from scratch. You’re applying what you already have more precisely, at the layer where agents actually operate.
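For readers whose APIs do speak OAuth, the token exchange primitive mentioned above is RFC 8693: a plain form POST to the token endpoint. The parameter names below come from the spec; the token values and endpoint are placeholders:

```python
# The RFC 8693 token-exchange request an agent (or a gateway acting for it)
# would send to trade a user's broad token for a narrowly scoped one.
# Parameter names are from the spec; values are placeholders.
from urllib.parse import urlencode

exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "USER_ACCESS_TOKEN",    # the user's original token
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "actor_token": "AGENT_CREDENTIAL",       # proves which agent is acting
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "scope": "bookings:create",              # narrow the grant to one action
    "audience": "https://api.example.com/bookings",
}

body = urlencode(exchange_request)  # form-encoded body that goes over the wire
# POST https://idp.example.com/oauth/token  (placeholder endpoint)
```

The response is a new access token bound to that scope and audience, typically short-lived, which is what the downstream API actually sees.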

The window for getting this right is closing

Enterprises are not waiting for security teams to figure out agent governance before deploying agents. The deployments are happening now, running on IAM infrastructure that wasn’t designed for non-human identities: tokens that are too broad, permissions that aren’t scoped to individual tasks, and no visibility into what agents are doing across API chains.

The question isn’t whether your IAM platform supports AI agents. Most vendors will say yes. The question is whether it governs what agents actually do at runtime or just authenticates them at the door and hopes for the best. That’s the gap Curity Access Intelligence is designed to close.

Authentication was always a moment. The part that matters now is everything that comes after.
