The AI safety conversation is focused on the wrong layer
Organizations have spent years accumulating fragmented identity systems: too many roles, too many credentials, too many disconnected tools. For a workforce of humans, that fragmentation was manageable. Humans log in, log out, and make decisions slowly enough that gaps in control rarely turn into immediate incidents. AI agents operate differently.

“AI agents change that completely,” said Ev Kontsevoy, CEO of Teleport. “Now you’re introducing non-deterministic actors that don’t sleep, don’t follow predictable paths, and can move across your infrastructure in seconds. And in most environments, we’re plugging them into the exact same model we already struggle to manage, with static credentials, fragmented identity, and over-scoped access, and very little real-time visibility into what they’re actually doing.”
Kontsevoy argues that identity sprawl has been misdiagnosed as a scaling problem. The underlying issue is control, and specifically, the absence of identity as a consistent control plane across infrastructure.
“That’s the point where identity sprawl stops being something you can clean up later and starts becoming something you can’t control at all,” he said. “If you can’t answer, in real time, what an identity is, how it is verified, and what it’s doing, you’ve already lost the thread.”
The building blocks exist. Consistent application does not.
Human identity management developed over decades, producing standards like SAML and OAuth that now underpin enterprise authentication broadly. Non-human identity lacks that consistency, not because the technical primitives are absent, but because their application is uneven.
Short-lived cryptographic credentials tied to verifiable identity and policy-governed enforcement are both available. The problem is that every platform, cloud provider, and tool implements them differently, producing the same kind of fragmentation that accumulated with human identity, but at larger scale and higher velocity.
“What a real stack should look like is much simpler,” Kontsevoy said. “At its core, it requires a unified identity layer that treats every actor — human, machine, or AI agent — as a first-class identity. Every non-human identity should be strongly tied to something verifiable, whether that’s a workload, a device, or an agent. Access should be short-lived, continuously validated, and constrained to what that identity is authorized to do, based on policy, nothing more.”
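The pattern Kontsevoy describes, a short-lived, policy-constrained credential tied to a verifiable identity, can be sketched with a signed, time-boxed token. The sketch below is illustrative only: the issuer key, field names, and scope strings are hypothetical, not Teleport's API, and a real deployment would use asymmetric keys and a standard token format.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # hypothetical; in practice, a per-issuer private key

def issue_token(identity: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Mint a short-lived, scope-constrained credential for a verified identity."""
    claims = {"sub": identity, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def authorize(token: str, required_scope: str) -> bool:
    """Validate signature, expiry, and scope before allowing the action."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # identity not verifiable
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() > claims["exp"]:
        return False  # credential expired; nothing sits around waiting to be used
    return required_scope in claims["scopes"]  # policy: what was granted, nothing more

token = issue_token("agent:billing-reconciler", ["db:read"])
print(authorize(token, "db:read"))   # True: verified, unexpired, in scope
print(authorize(token, "db:write"))  # False: outside granted policy
```

The design choice worth noting is that access is denied by default on any of three independent checks: an unverifiable identity, an expired credential, or an out-of-scope request.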
Kontsevoy described that architecture as technically feasible and said it is implemented in Teleport. The broader obstacle is conceptual: most organizations still treat identity as something added after infrastructure is built, rather than as infrastructure itself.
Regulators are mapping new behavior onto old accountability models
Regulated industries, including finance, healthcare, and critical infrastructure, are deploying agentic AI at a pace that regulatory frameworks have not matched. Existing accountability models assume that a human is ultimately responsible for any given decision, and that decisions are traceable to a linear chain of approvals.
Agentic systems break that assumption. They can take actions, chain decisions together, and produce outcomes that are difficult to explain after the fact. There may be no single decision point, and in some cases no direct human actor at all.
“Regulators are starting to recognize this, but they’re still early,” Kontsevoy said. “Most of the current frameworks focus on governance, risk classification, and documentation. That’s necessary, but it doesn’t solve the core problem, which is operational accountability.”
He said operational accountability in agentic environments will ultimately depend on control over identity and the policies governing it. Organizations that can demonstrate, in real time, that every action was tied to a verified identity operating under enforced policy will be better positioned to satisfy regulatory scrutiny than those that can show policy documentation alone.
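Demonstrating in real time that every action was tied to a verified identity under enforced policy implies an audit trail where each action carries both. A minimal record might look like the following sketch; the field names are illustrative, not drawn from any specific product.

```python
import json
import time
import uuid

def record_action(identity: str, action: str, policy: str, allowed: bool) -> str:
    """Emit one audit record tying an action to a verified identity and the
    policy decision that governed it (field names are illustrative)."""
    entry = {
        "id": str(uuid.uuid4()),       # unique record identifier
        "ts": time.time(),             # when the action was attempted
        "identity": identity,          # the verified actor, human or agent
        "action": action,              # what was attempted
        "policy": policy,              # which policy was evaluated
        "allowed": allowed,            # the enforcement decision
    }
    return json.dumps(entry)  # in practice: append to a tamper-evident log
```

With records of this shape, answering a regulator's "who did this, and under what authority" becomes a query rather than a reconstruction.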
Three steps for CISOs, and one set of habits to drop
For security leaders beginning to address non-human identity, Kontsevoy outlined a sequence of three actions.
The first is to establish identity as the control plane. “Not by ripping and replacing everything,” he said, “but by making identity the control plane across your infrastructure. Every human, machine, workload, and AI agent should operate as a first-class identity within the same system.”
The second is to eliminate static, long-lived credentials. “Static keys, shared secrets, anything that sits around waiting to be used, that model doesn’t hold up once you introduce agents operating continuously. Everything should be short-lived, issued dynamically on demand, and tied to a cryptographically verifiable identity.”
The third is to use the visibility gained from the first two steps to continuously harden the environment. Without a complete picture of what identities exist, including service accounts, workloads, and tokens, security teams are making access decisions without adequate information.
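The third step, continuous hardening, can be sketched as a recurring inventory audit. The inventory rows and field names below are hypothetical; in a real environment this data would come from cloud provider and identity provider APIs.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory rows; real data would come from cloud/IdP APIs.
inventory = [
    {"name": "deploy-bot", "kind": "service_account", "credential": "static_key",
     "issued": datetime(2023, 1, 10, tzinfo=timezone.utc), "scopes": ["*"]},
    {"name": "agent:support-triage", "kind": "ai_agent", "credential": "short_lived_cert",
     "issued": datetime.now(timezone.utc), "scopes": ["tickets:read"]},
]

def flag_risks(identities, max_age=timedelta(days=90)):
    """Surface identities that violate basic hygiene: static or stale
    credentials, and wildcard (over-scoped) access."""
    findings = []
    now = datetime.now(timezone.utc)
    for ident in identities:
        if ident["credential"] == "static_key":
            findings.append((ident["name"], "static credential"))
        if now - ident["issued"] > max_age:
            findings.append((ident["name"], "credential older than 90 days"))
        if "*" in ident["scopes"]:
            findings.append((ident["name"], "over-scoped access"))
    return findings

for name, issue in flag_risks(inventory):
    print(f"{name}: {issue}")
```

Run on a schedule, a check like this turns the visibility from the first two steps into a concrete, shrinking list of exceptions rather than a one-time cleanup.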
On the actions to stop: “Stop creating new service accounts as a shortcut. Stop embedding credentials into scripts and workflows. Stop assuming that because something is ‘internal,’ it’s safe. Those habits were already risky, but with AI in the mix, they scale in ways that are very hard to unwind.”
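The habit of embedding credentials in scripts has a simple baseline alternative: resolve the secret at runtime and fail loudly when it is absent. A minimal sketch (the environment variable name is illustrative; a production setup would fetch a short-lived credential from a secret manager rather than a static one from the environment):

```python
import os

# Anti-pattern the quote warns against: a secret baked into the script,
# where it lives in version control and copies of the repo forever.
# DB_PASSWORD = "hunter2"

def get_db_password() -> str:
    """Resolve the credential at runtime instead of embedding it."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not set; fetch it from your secret manager")
    return password
```

The point of failing loudly is that a missing credential surfaces as an immediate, attributable error instead of a silent fallback to some shared or hardcoded secret.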
Model safety discussions are missing the bigger question
Much of the public conversation about AI risk focuses on model behavior: hallucination, alignment, output quality. Kontsevoy said the more consequential risk in enterprise deployments sits in the identity and authorization layer, not in the models themselves.
“If a model gives a bad answer, that’s usually recoverable,” he said. “If an agent with the wrong level of access takes the wrong action, that’s where you see real impact. Identity determines whether a mistake becomes an incident.”
He described many of the AI risks that enterprises are concerned about as familiar security problems appearing in a new form. Fragmented identity, static credentials, and over-scoped permissions are not new phenomena. The difference is that AI systems can exercise that access continuously and at machine speed.
“The question isn’t just whether the model is safe. It’s whether the identity behind it is continuously verified and constrained by policy.”
In most enterprise environments, agents are connected to existing systems with broad access because speed of deployment takes priority over access hygiene. That approach inherits all of the identity fragmentation and credential risks already present in those environments. “If you get identity right, you reduce most of the real risk,” Kontsevoy said. “If you don’t, AI will simply amplify every weakness that’s already there.”