AI agents behave like users, but don’t follow the same rules

Current security and governance approaches to autonomous AI agents still rely on static credentials, inconsistent controls, and limited visibility. Securing these agents requires the same rigor and traceability applied to human users, according to the Cloud Security Alliance's Securing Autonomous AI Agents report.


Agents scale faster than governance frameworks

Autonomous AI agents act on behalf of humans, accessing data and making decisions with business impact. Organizations are deploying them across production environments, pilots, tests, and broader AI or automation initiatives. As a result, agents operate across multiple environments, expanding the agentic workforce without corresponding governance and IAM controls.

“The agentic workforce is scaling faster than identity and security frameworks can adapt. Success in the agentic era will hinge on treating agent identity with the same rigor historically reserved for human users, enabling secure autonomy at enterprise scale,” said Hillary Baron, AVP of Research, Cloud Security Alliance.

Confidence in existing IAM tools to manage agent identity remains low, showing that identity architectures optimized for humans are not ready to govern autonomous agents.

Responsibility for managing agent identities is also undefined. Security, IT, DevOps, IAM, GRC, and emerging AI security teams often share accountability, creating gaps in oversight and inconsistent policy enforcement.

Respondents also express uncertainty about their ability to pass compliance audits related to AI agent activity and access controls. Governance frequently relies on informal practices rather than defined frameworks. As a result, enterprises risk deploying capable agents into environments where rules for identity, accountability, and authorization are still undefined.

Outdated credentials and fragmented access controls

Despite integrating AI agents into production, many organizations continue to rely on credentialing and access patterns not designed for autonomous systems. API keys, usernames and passwords, and shared service accounts remain common, while approaches such as OIDC, OAuth PKCE, or SPIFFE/SVID workload identities are less widely adopted. This reflects uncertainty over whether AI agents should be treated as machine identities, human equivalents, or something in between.

Fragmentation is further reinforced by authorization models built for human users and application access rather than continuously operating agents. Runtime access controls are inconsistent, guardrail adoption remains limited, and secrets management, session recording, and audit logging are far from universal.

These gaps leave organizations without continuous control over agent behavior once credentials are issued. Static credentials and periodic policy checks cannot support the continuous authentication and context-aware authorization required for autonomous agents, making it difficult to determine which agent acted, under what conditions, and on whose behalf.

Limited visibility and weak traceability

Even as AI agent usage expands, most organizations lack the visibility required to manage them safely. Agent registries are fragmented across identity providers, custom databases, internal service registries, and third-party platforms. Rather than deploying purpose-built systems for agent discovery and governance, organizations are retrofitting existing tools, resulting in partial, delayed, and siloed visibility.

Traceability and monitoring are similarly inconsistent. Companies often cannot determine what agents did, what they accessed, under which authorization, or on whose request.
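Closing that gap requires every agent action to leave a record answering all four questions at once. A minimal sketch of such an append-only audit entry (field names are illustrative, not a standard schema):

```python
import json
import time


def audit_record(agent: str, action: str, resource: str,
                 authorization: str, requested_by: str) -> str:
    """One structured log line answering: which agent acted, on what,
    under which authorization, and on whose request."""
    return json.dumps({
        "ts": time.time(),
        "agent": agent,             # what did it
        "action": action,           # what it did
        "resource": resource,       # what it accessed
        "authorization": authorization,  # under which grant
        "requested_by": requested_by,    # on whose request
    })


line = audit_record("agent-billing", "read", "invoices/2024",
                    "grant-42", "alice")
```

Emitting records like this for every action, rather than only for logins or errors, is what turns fragmented monitoring into per-action traceability.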

Respondents indicated that actions such as accessing sensitive data, making system changes, approving financial transactions, and granting permissions still require human oversight. This reflects both limited trust in agents operating autonomously in high-stakes scenarios and the fact that agent governance has yet to reach continuous, auditable maturity.

Rising awareness and investment in agent identity

Security and governance gaps are becoming more visible, prompting enterprises to increase identity and security budgets to accommodate AI agents. Agent identity is beginning to emerge as a distinct, funded component of enterprise security architectures.

When asked about their top concerns, respondents cited sensitive data exposure, unauthorized or unintended actions, limited expertise, credential misuse or over-provisioning, lack of agent identity standards, difficulty discovering or registering agents, integration challenges with legacy systems or APIs, and insufficient awareness or training.
