What the EU AI Act requires for AI agent logging

The EU AI Act is 144 pages long. The logging requirements that matter for AI agent developers sit across four articles that keep referencing each other. Here’s what they say, when the deadlines hit, and where the gaps are.

Your agent is probably high-risk

The Act doesn’t mention “AI agents” by name. What matters is what the system does. If your agent scores credit applications, filters resumes, decides who gets healthcare benefits, prices insurance, or triages emergency calls, it falls under Annex III and is classified as high-risk.

Article 6(3) offers a way out. If the system doesn’t materially influence decision outcomes, it may not qualify. In practice, that’s a hard case to make for an agent that calls tools and acts on the results on its own.

General-purpose AI models have separate obligations under Chapter V. The model itself doesn’t become high-risk. The system built on top of it does, once deployed in a high-risk context. The model provider keeps its Chapter V obligations. The integrator picks up the high-risk provider obligations under Article 25.

The four articles that matter

Article 12 says high-risk AI systems “shall technically allow for the automatic recording of events (logs) over the lifetime of the system.” Two words in that sentence do the heavy lifting. Automatic means the system generates logs on its own, and you can’t satisfy this with manual documentation. Lifetime means from deployment to decommissioning, not just the current release.
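"Automatic" is easiest to see in code. Here is a minimal sketch of what system-generated recording looks like: a decorator that emits a structured record for every tool call the agent makes, so logging happens without anyone remembering to write a log line. All names here (`auto_log`, `score_credit`) are illustrative, not from the Act or any standard.

```python
import functools
import json
import sys
import time

def auto_log(fn):
    """Wrap a tool function so every invocation is recorded by the
    system itself -- the agent cannot opt out. This is the sense in
    which Article 12 logging must be 'automatic'."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        record = {
            "ts": time.time(),
            "tool": fn.__name__,
            "args": repr(args),
            "result": repr(result),
        }
        sys.stderr.write(json.dumps(record) + "\n")
        return result
    return wrapper

@auto_log
def score_credit(applicant_id: str) -> float:
    return 0.72  # placeholder decision logic

score_credit("A-123")
```

In a real system the record would go to an append-only sink rather than stderr, but the shape is the same: the logging layer sits between the agent and its tools.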

Article 12(2) defines three categories your logs need to cover: situations where the system might present a risk or undergo a substantial modification, data for post-market monitoring, and data for operational monitoring by deployers. The regulation doesn’t prescribe a format or require specific fields, only those three purposes.
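Since the regulation names purposes rather than fields, one reasonable approach is to design each log entry so that every Article 12(2) purpose has a home. The sketch below is one possible schema, not a mandated one; every field name is my own invention.

```python
import json
import time
import uuid

def make_log_entry(agent_id, event_type, payload, risk_flags=None):
    """Build one structured log record. The field names are
    illustrative; Article 12(2) specifies only the purposes the
    logged data must serve, not a schema."""
    return {
        "entry_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "event_type": event_type,  # e.g. "tool_call", "llm_response"
        "payload": payload,
        # Purpose 1: spotting situations of risk or substantial modification
        "risk_flags": risk_flags or [],
        # Purpose 2: post-market monitoring by the provider
        "model_version": "v1.0",
        # Purpose 3: operational monitoring by the deployer
        "session_id": "sess-001",
    }

entry = make_log_entry("agent-7", "tool_call", {"tool": "credit_score"})
print(json.dumps(entry, indent=2))
```

The point is traceability per purpose: when a regulator asks how your logs support post-market monitoring, you can point at concrete fields rather than argue from the raw stream.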

Article 13 says you need to document how deployers can collect and interpret the logs. Think of it as a technical integration guide for your logging layer, not a compliance manual.

Articles 19 and 26 set a six-month minimum for keeping logs. Financial services firms can fold AI logs into their existing regulatory paperwork. Everyone else holds onto them for at least half a year, possibly longer depending on sector rules.

Why regular logs aren’t enough

Your agent calls tools, delegates to sub-agents, gets LLM responses, and produces a final answer. Standard application logging captures all of that without any trouble.

The problem shows up six months later. A regulator asks you to prove the log wasn’t modified. Application logs live on infrastructure someone controls. They can be edited or replaced without anyone noticing.

Article 12 doesn’t say “tamper-proof.” But if your logs can be silently altered and you can’t show otherwise, their evidentiary value is zero. For high-risk systems, that’s a problem.

This gap is what led me to explore cryptographic signing for agent logs, an approach I’ve been building into a project called Asqav. The idea is simple: sign each agent action with a key the agent doesn’t hold, chain each signature to the previous one, and store the receipt where the agent can’t touch it. Change one entry and the chain breaks visibly.

But the pattern matters more than any single tool. The signing key lives outside the agent’s trust boundary, every action gets a receipt, and the receipts form a verifiable chain. Whether you use NIST FIPS 204 post-quantum signatures or a different scheme, any implementation that follows those principles will satisfy what Article 12 is getting at.
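The chaining pattern fits in a few lines. This sketch uses stdlib HMAC as a stand-in for a real signature scheme (such as FIPS 204 ML-DSA); the key lives with a logging service the agent never sees, and every receipt commits to the one before it. Names and the genesis value are my own choices, not part of any standard.

```python
import hashlib
import hmac

# Held by the logging service, outside the agent's trust boundary.
SIGNING_KEY = b"held-by-the-logging-service"

def sign_action(prev_signature: bytes, action: str) -> bytes:
    """Chain each receipt to the previous one: sign
    H(prev_signature || action) with a key the agent never holds."""
    digest = hashlib.sha256(prev_signature + action.encode()).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify_chain(actions, signatures) -> bool:
    """Recompute the chain and compare receipt by receipt."""
    prev = b"\x00" * 32  # arbitrary genesis value
    for action, sig in zip(actions, signatures):
        if not hmac.compare_digest(sign_action(prev, action), sig):
            return False
        prev = sig
    return True

actions = ["call:credit_score", "llm:response", "final:approve"]
receipts = []
prev = b"\x00" * 32
for a in actions:
    prev = sign_action(prev, a)
    receipts.append(prev)

assert verify_chain(actions, receipts)
actions[1] = "llm:edited"               # tamper with one entry
assert not verify_chain(actions, receipts)  # the chain breaks visibly
```

An HMAC proves integrity only to whoever holds the key; an asymmetric scheme lets a regulator verify the chain with a public key alone, which is why a real deployment would use signatures rather than MACs.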

No standard yet

There’s no finalized technical standard for Article 12 logging yet. Two drafts are worth watching: prEN 18229-1, which covers AI logging and human oversight, and ISO/IEC DIS 24970, which focuses on AI system logging. Neither has been completed.

You’re building to a regulation that defines outcomes without specifying how. Teams that get logging right now will be ahead when the standards land. Teams that wait risk retrofitting under pressure.

Deadlines and penalties

Annex III obligations take effect August 2, 2026. The Commission proposed a delay through the Digital Omnibus package last November, possibly pushing to December 2027, and both the Council and Parliament adopted negotiating positions in March 2026 with trilogues underway. But nothing has passed into law, so August 2026 remains the enforceable date.

Miss it and the penalty is up to 15 million euros or 3% of worldwide annual turnover, whichever is higher. The statute applies this formula to all entities, but Article 99 requires penalties to be proportionate and dissuasive, and instructs national authorities to take company size and economic viability into account. In practice, startups and SMEs should face lower fines than the maximum, even if the formula itself does not change.

Three questions

  • Can your system generate logs automatically at every decision point?
  • Can those logs survive tampering?
  • Can you keep them for six months in a format a regulator can read?

If not, August is closer than it looks.