Article 12 requires high-risk AI systems to automatically log events throughout their lifecycle. Here's what that actually means in practice, and why your agent's conversation history doesn't qualify.
The EU AI Act entered into force on August 1, 2024, and its provisions phase in through 2027. Obligations for high-risk systems under Annex III, which covers AI in healthcare, finance, employment, and law enforcement, apply from August 2, 2026. High-risk systems embedded in regulated products (Annex I) follow by August 2, 2027.
Article 12 is one of the requirements that enterprises are least prepared for. Not because it's obscure — it's straightforward to read. But because the gap between what it requires and what most AI systems actually record is enormous.
The full text requires that high-risk AI systems include "logging capabilities that enable the automatic recording of events ('logs') while the high-risk AI systems are operating." Specifically, the logs must enable:

- identifying situations that may result in the system presenting a risk (within the meaning of Article 79(1)) or in a substantial modification;
- facilitating the post-market monitoring required by Article 72; and
- monitoring the operation of the system by deployers under Article 26(5).
For certain high-risk systems (notably remote biometric identification under Annex III, point 1(a)), Article 12 specifies minimum log contents: period of each use, the reference database checked, input data that led to a match, and the identity of natural persons involved in verifying results.
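To make that concrete, here is a minimal sketch of those four minimum fields as a structured log record. The `BiometricUseRecord` name and its field names are illustrative choices for this article, not terminology from the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class BiometricUseRecord:
    """Minimum log contents for remote biometric identification
    under Article 12 (illustrative field names, not the Act's terms)."""
    use_start: datetime               # period of each use: start
    use_end: datetime                 # period of each use: end
    reference_database: str           # database the input data was checked against
    matched_input_ref: str            # reference to the input data that led to a match
    verifiers: list[str] = field(default_factory=list)  # natural persons who verified the results
```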
For other high-risk systems — including those in healthcare, finance, and legal — Article 12 is less prescriptive about exact log fields but requires logging capabilities that are "appropriate to the intended purpose of the system." This is both more flexible and more demanding: you need to determine what's appropriate for your use case, and defend that determination to regulators.
Key implication: The "appropriate to the intended purpose" standard means you can't just log prompts and responses. For an agent that acts on knowledge, regulators will ask: what knowledge did the agent consult? Were there conflicts? What was the confidence level? Who reviewed the output?
Most AI agent deployments today log the conversation: the user's prompt, the agent's response, maybe the retrieved documents. This is necessary but nowhere near sufficient for Article 12 compliance. Here's what's missing:

- the state of the knowledge base at the time of the decision (what the agent believed when it acted);
- the provenance chain from sources to output;
- which constraints and policy gates the output was verified against;
- the human oversight record, linked to the specific decision;
- tamper-evident integrity for all of the above.
A system that satisfies Article 12 for autonomous agent deployments needs to record, for each decision:

- the period of use (timestamps for when the decision ran);
- a snapshot reference for the knowledge state the agent consulted;
- the sources consulted, with any conflicts and how they were resolved;
- the confidence or consensus score attached to the answer;
- each constraint gate the output passed, with its result;
- the identity of any human who reviewed or approved the output;
- a hash linking the record to the previous one in the chain.
Below is a sketch of what a compliant audit record could look like, in Python. The schema and field names are illustrative, not mandated by Article 12 and not taken from Brain's implementation. The essentials: every decision links to the knowledge state, the constraint verification chain, and the human oversight record, and each record's hash chains to its predecessor so tampering is detectable.
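```python
import hashlib
import json
from datetime import datetime, timezone

def make_audit_record(prev_hash: str, decision: dict) -> dict:
    """Build one hash-chained audit record (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_id": decision["id"],
        # Knowledge state: the snapshot of the knowledge base the agent consulted
        "knowledge_snapshot": decision["snapshot_id"],
        "sources_consulted": decision["sources"],      # e.g. source IDs with authority scores
        "conflicts": decision.get("conflicts", []),    # conflicting sources and their resolution
        "consensus_score": decision["consensus_score"],
        # Constraint verification chain: every gate the output passed, with its result
        "constraint_gates": decision["gates"],         # e.g. [{"gate": "pii_check", "passed": True}]
        # Human oversight, linked to this specific decision
        "reviewed_by": decision.get("reviewer"),
        "prev_hash": prev_hash,
    }
    # The hash covers every field above, chaining this record to its predecessor
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Chain records by feeding each record's hash into the next call:
#   first = make_audit_record("genesis", decision_a)   # "genesis" anchors the chain
#   second = make_audit_record(first["hash"], decision_b)
```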
| Requirement | Conversation logs | Guardrails SDK | Brain |
|---|---|---|---|
| Period of use | Yes | Yes | Yes |
| Reference database state | No | No | Yes — consensus snapshot |
| Decision provenance chain | No | Partial | Yes — full chain |
| Constraint verification | No | Yes | Yes — per gate |
| Human oversight record | No | No | Yes — linked to decision |
| Tamper-proof integrity | No | No | Yes — hash-chained |
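The last row deserves a note. With records shaped like the sketch above, integrity is checkable by anyone holding the log: recompute each hash and confirm every record points at its predecessor. A minimal verifier, under the same illustrative schema:

```python
import hashlib
import json

def verify_chain(records: list[dict], anchor: str = "genesis") -> bool:
    """Recompute every record hash and check each link to its predecessor."""
    prev_hash = anchor
    for record in records:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False   # broken link: a record was removed or reordered
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False   # contents were altered after the fact
        prev_hash = record["hash"]
    return True
```

Strictly speaking, a hash chain is tamper-evident rather than tamper-proof: it doesn't stop anyone from modifying a record, but any modification breaks verification from that point onward.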
Article 12 doesn't exist in isolation. Article 11 requires technical documentation describing the system's design, development, and intended purpose. Article 13 requires transparency — the ability to explain to users how the system works and what its limitations are.
Together, Articles 11, 12, and 13 form a documentation stack:

- Article 11: static documentation of the system's design, development, and intended purpose;
- Article 12: runtime logs of what the system actually did in operation;
- Article 13: transparency, so users can understand how the system works and what its limitations are.
Brain addresses all three. The consensus scoring model is fully explainable (Article 13 — every score can be decomposed into source authority, evidence weight, and temporal decay). The hash-chained audit trail satisfies Article 12. And the system's design is documented with full provenance metadata (Article 11).
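As an illustration of what "fully explainable" can mean here: a score built as a simple product of the three factors can always be decomposed back into them. The multiplicative combination and the 90-day half-life below are assumptions made for this sketch, not Brain's published scoring model:

```python
from datetime import datetime, timezone

def consensus_score(sources: list[dict], half_life_days: float = 90.0) -> dict:
    """Decomposable consensus score: authority x evidence weight x temporal decay.
    Each source is assumed to carry 'id', 'authority', 'evidence_weight',
    and a timezone-aware 'observed_at' datetime."""
    now = datetime.now(timezone.utc)
    breakdown = []
    for s in sources:
        age_days = (now - s["observed_at"]).days
        decay = 0.5 ** (age_days / half_life_days)          # exponential temporal decay
        contribution = s["authority"] * s["evidence_weight"] * decay
        breakdown.append({
            "source": s["id"],
            "authority": s["authority"],
            "evidence_weight": s["evidence_weight"],
            "temporal_decay": round(decay, 3),
            "contribution": round(contribution, 3),
        })
    total = sum(b["contribution"] for b in breakdown)
    norm = sum(s["authority"] for s in sources) or 1.0      # normalise by total authority
    # The breakdown is stored alongside the score, so the Article 13
    # explanation is the decomposition itself rather than a post-hoc rationale.
    return {"score": round(total / norm, 3), "breakdown": breakdown}
```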
For Annex III high-risk systems (healthcare, finance, employment, law enforcement), obligations apply from August 2, 2026. For regulated product AI (Annex I), the deadline is August 2, 2027.
Non-compliance penalties for high-risk system obligations: up to €15M or 3% of global annual turnover, whichever is higher. (Violations of prohibited AI practices under Article 5 carry the higher tier: €35M or 7%.)
If you're deploying autonomous agents in domains covered by Annex III, you have months, not years, to get Article 12-compliant logging in place.
Brain's audit trail is designed for EU AI Act compliance out of the box. See it with your agent stack.
Get a Demo