As AI agents start making purchases, security teams must rethink risk
In this Help Net Security interview, Donald Kossmann, CTO at fintech company Chargebacks911, talks about the emerging security, fraud, and governance risks of “agentic commerce,” where AI agents can autonomously make purchasing decisions on behalf of users or organizations.
He explains that as AI agents gain the ability to shop, negotiate prices, select suppliers, and execute transactions independently, traditional assumptions about digital commerce begin to break down.

What is the most underappreciated security assumption people are making about agentic commerce right now?
The most underappreciated assumption is that a technically authorized transaction will always reflect genuine user intent. That has largely been true in click-based commerce, but agentic systems weaken that link.
When an AI agent is allowed to act persistently on a customer’s behalf, the risk shifts from credential theft to intent drift. The agent may operate within its permissions but still produce outcomes the user does not expect or accept. From a payments and dispute perspective, that creates a gray zone that existing controls were not designed to handle.
The industry is still heavily focused on access security, when the bigger emerging risk is decision integrity.
Are current OAuth-style delegated authorization frameworks sufficient for persistent AI agents with spending authority, or do we need a new standard?
OAuth-style delegation is a useful starting point, but it was not designed for continuously operating commercial agents with financial authority. Traditional delegated access assumes relatively bounded, user-initiated actions.
Persistent agents introduce different risk characteristics. Permissions may remain valid long after user intent has changed. Agents may also operate across multiple merchants, services, and time horizons in ways the original consent model did not fully anticipate.
What we likely need is an evolution rather than a complete replacement. That includes more granular, revocable, and context-aware permissions, stronger auditability of agent decisions, and clearer evidence trails that show not just what was authorized, but how that authority was exercised over time.
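A minimal sketch of what such an evolved grant might look like, in Python. All names here (`AgentGrant`, its fields, the `authorize` method) are illustrative assumptions, not part of OAuth or any existing standard; the point is that expiry, revocation, and a usage log travel with the authority itself:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentGrant:
    """Hypothetical delegated-authority grant for a purchasing agent."""
    agent_id: str
    max_amount: float       # per-transaction spending cap
    categories: frozenset   # allowed purchase categories
    expires_at: datetime    # hard expiry, forcing re-consent
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def authorize(self, amount: float, category: str) -> bool:
        """Check a proposed purchase and record how authority was used."""
        now = datetime.now(timezone.utc)
        allowed = (
            not self.revoked
            and now < self.expires_at
            and amount <= self.max_amount
            and category in self.categories
        )
        # Log the decision context, not just the outcome, so later audits
        # can show how the authority was exercised over time.
        self.audit_log.append({
            "at": now.isoformat(), "amount": amount,
            "category": category, "allowed": allowed,
        })
        return allowed
```

A grant like this would sit behind whatever bearer token the platform issues; unlike a plain scope string, it can be revoked mid-lifetime and leaves an evidence trail of every authorization check, granted or denied.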
What controls should enterprises demand before allowing AI agents to connect to procurement systems or corporate credit facilities?
Enterprises should start with four non-negotiables.
First, tightly scoped and time-bound permissions. Agents should never have open-ended purchasing authority. Spending limits, category controls, supplier constraints, and expiration conditions should all be explicit.
Second, full decision transparency. Organizations need detailed logs showing why an agent selected a supplier, changed a vendor, or executed a transaction.
Third, real-time human override capability. If an agent begins behaving unexpectedly, teams must be able to pause or revoke authority immediately.
Fourth, strong post-transaction evidence capture. As disputes and audit challenges emerge, enterprises will need to demonstrate what the agent was allowed to do and what it actually did.
Without those controls, organizations are effectively extending financial authority without extending governance.
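The four controls above could be combined into a single gateway sitting between agents and the procurement system. This is a sketch under stated assumptions: the class, method names, and the required `rationale` field are hypothetical, not an existing API:

```python
class ProcurementGateway:
    """Hypothetical gateway enforcing scoped permissions, decision
    transparency, human override, and evidence capture."""

    def __init__(self, spend_limit: float, approved_suppliers):
        self.spend_limit = spend_limit
        self.approved_suppliers = set(approved_suppliers)
        self.paused = False   # real-time human override (kill switch)
        self.evidence = []    # post-transaction evidence capture

    def pause(self):
        """Immediately suspend all agent purchasing authority."""
        self.paused = True

    def execute(self, agent_id: str, supplier: str,
                amount: float, rationale: str) -> bool:
        """Gate a purchase; a rationale is mandatory so every decision
        is explainable after the fact."""
        if self.paused:
            decision = "blocked: authority paused"
        elif supplier not in self.approved_suppliers:
            decision = "blocked: supplier not approved"
        elif amount > self.spend_limit:
            decision = "blocked: over spend limit"
        elif not rationale:
            decision = "blocked: missing decision rationale"
        else:
            decision = "approved"
        # Record what the agent was allowed to do and what it actually
        # did, whether or not the transaction went through.
        self.evidence.append({
            "agent": agent_id, "supplier": supplier,
            "amount": amount, "rationale": rationale,
            "decision": decision,
        })
        return decision == "approved"
```

Note that blocked attempts are logged too; for dispute and audit purposes, what the agent tried to do can matter as much as what it completed.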
Could shopping agents become high-value intelligence targets for threat actors seeking behavioral profiling at scale?
Yes, and this risk is still under-discussed. Well-trained shopping or procurement agents accumulate extremely rich behavioral data, including purchasing preferences, price sensitivity, supplier relationships, and timing patterns. At scale, that data becomes commercially and operationally valuable. If compromised, these agents could provide adversaries with insight into both individual behavior and enterprise procurement strategy. That raises concerns not just around fraud, but around competitive intelligence, social engineering, and targeted manipulation.
As agents become more capable and more autonomous, they will increasingly look like high-value reconnaissance assets from an attacker’s perspective.
What are the privacy implications if agents begin interacting directly with other agents in marketplaces?
Agent-to-agent commerce introduces a new layer of opacity. When humans transact, there are natural friction points where intent, consent, and identity are visible. Autonomous agent interactions compress or remove many of those checkpoints.
From a privacy standpoint, there is a risk of excessive data exposure between agents, especially if negotiation, personalization, or dynamic pricing models require sharing behavioral signals. Over time, that could enable more detailed profiling than users or enterprises realize.
The key issue is not just data protection in the traditional sense, but data inference. Even limited signals exchanged repeatedly between agents can reveal sensitive patterns about buyers, organizations, or procurement strategies.
If an AI agent can autonomously negotiate prices or switch suppliers, how might adversaries manipulate the negotiation layer itself?
The negotiation layer will almost certainly become a new attack surface. Bad actors could attempt to influence pricing signals, poison training data, spoof supplier attributes, or exploit optimization logic to steer agents toward suboptimal or malicious counterparties.
Unlike traditional fraud, these attacks may not involve obvious rule-breaking. Instead, they may subtly distort the decision environment the agent relies on. From a risk perspective, the danger is that the transaction still appears fully authorized and policy-compliant. The manipulation happens upstream in the decision logic. This is another reason why detailed auditability and explainability of agent decisions will become critical.
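One illustrative guardrail against upstream distortion, assuming the agent's supplier quotes can be compared against peers: flag signals that deviate sharply from the market before the agent acts on them. The function name and tolerance threshold are hypothetical, and a real deployment would use more robust statistics:

```python
from statistics import median

def flag_suspicious_quotes(quotes: dict, tolerance: float = 0.25) -> dict:
    """Return quotes deviating from the peer median by more than
    `tolerance` (as a fraction of the median).

    `quotes` maps supplier -> quoted price. The idea: an attacker who
    distorts the agent's decision environment may push one supplier's
    signal far from the market, even though the resulting transaction
    would still look fully authorized on its own.
    """
    mid = median(quotes.values())
    return {
        supplier: price
        for supplier, price in quotes.items()
        if abs(price - mid) > tolerance * mid
    }
```

Flagged quotes would route to human review rather than being silently excluded, since a genuine bargain and a poisoned signal can look identical to the optimizer.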
How should organizations evaluate the security posture of an AI vendor that embeds purchasing autonomy into its platform?
Organizations should move beyond traditional vendor questionnaires and focus specifically on decision governance. Key areas to examine include how the vendor scopes agent permissions, how decisions are logged and explained, how quickly authority can be revoked, and how the platform detects anomalous agent behavior. It is also important to understand how the vendor separates model behavior from commercial incentives.
If a platform cannot clearly demonstrate how it preserves customer intent throughout the transaction lifecycle, that is a material risk signal.
How should security teams rethink third-party risk management when counterparties may increasingly be autonomous agents rather than humans?
Third-party risk models will need to expand from entity trust to decision trust. Historically, organizations assessed the security posture of the counterparty organization. With agentic commerce, teams must also assess the behavior, permissions, and governance of the autonomous systems acting on that party’s behalf.
This means incorporating agent-level telemetry, stronger transaction context, and more dynamic monitoring into third-party risk frameworks. It also means preparing for a world where disputes, liability questions, and security reviews increasingly center on how an autonomous decision was made, not just who initiated the relationship.
