The EU AI Act's transition periods are over. For organisations deploying what the Act classifies as "high-risk" AI systems (those touching hiring, credit, education, biometric identification, or customer-facing decisions), the obligations are live and enforceable. Yet most AI teams have barely begun to operationalise them.
What Counts as High-Risk AI?
The EU AI Act defines high-risk AI systems across eight broad categories under Annex III. If your AI system is used in any of the following areas, which cover the most common commercial deployments, you are almost certainly in scope:
- Hiring & HR decisions — recruitment screening, performance evaluation, promotion decisions
- Credit & financial services — creditworthiness scoring, insurance risk classification
- Education & vocational training — admissions, assessment, monitoring of learners
- Customer-facing services — AI used in essential private and public services
- Biometric systems — real-time or post-hoc identification from physical, physiological, or behavioural characteristics
- Critical infrastructure — AI managing utilities, transport, or financial systems
The Five Obligations Most Teams Are Missing
1. A Risk Management System Across the Full Lifecycle
Article 9 requires a documented, operational risk management system that isn't static — it must be updated as the system evolves. Every model retrain, every data schema change, and every deployment to a new context must trigger a risk review. A Word document written in 2024 does not satisfy Article 9 in 2026.
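To make the trigger-based approach concrete, here is a minimal sketch of a lifecycle hook that opens a risk review whenever one of those events fires. The event names, the `RiskReview` structure, and the idea of a deployment gate are illustrative assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative lifecycle events that should trigger a risk review;
# this taxonomy is an assumption, not prescribed by the Act.
REVIEW_TRIGGERS = {"model_retrain", "data_schema_change", "new_deployment_context"}

@dataclass
class RiskReview:
    event: str
    triggered_at: str
    status: str = "open"  # closed only once mitigations are assessed and evidenced

@dataclass
class RiskManagementSystem:
    reviews: list[RiskReview] = field(default_factory=list)

    def on_lifecycle_event(self, event: str) -> None:
        """Open a risk review whenever a triggering lifecycle event occurs."""
        if event in REVIEW_TRIGGERS:
            self.reviews.append(RiskReview(
                event=event,
                triggered_at=datetime.now(timezone.utc).isoformat(),
            ))

    def has_open_reviews(self) -> bool:
        """Use as a deployment gate: block release while reviews are open."""
        return any(r.status == "open" for r in self.reviews)

rms = RiskManagementSystem()
rms.on_lifecycle_event("model_retrain")  # e.g. fired from the training pipeline
assert rms.has_open_reviews()            # deployment should be blocked here
```

The point is not the data structure but the wiring: the review is opened by the pipeline event itself, so it cannot be forgotten.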
2. Continuous Record-Keeping and Logging
Article 12 mandates automatic recording of events ("logs") over the lifetime of the system, with sufficient granularity to ensure traceability. Under Articles 19 and 26(6), those logs must be retained for at least six months, and longer where other Union or national law requires, which in regulated sectors can mean years after decommissioning. If your inference logs aren't archived with full context today, you're already in violation.
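As an illustration of what "sufficient granularity" might mean in practice, the sketch below writes one structured record per inference to an append-only file. The field names and the JSONL format are assumptions on my part; Article 12 sets the goal of traceability, not a schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(model_version: str, inputs: dict, output: dict,
                  deployment_context: str, path: str = "inference_log.jsonl") -> dict:
    """Append one traceable inference record to an append-only JSONL file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "deployment_context": deployment_context,
        # Hash the inputs so the log stays traceable without storing
        # raw personal data alongside it.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_inference("credit-scorer-2.3.1", {"income": 52000}, {"score": 0.71}, "eu-prod")
```

Archiving the file to immutable storage on a schedule then becomes a retention problem rather than a retrofitting problem.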
3. Technical Documentation That Stays Current
Annex IV specifies detailed technical documentation requirements. Critically, this documentation must reflect the current state of the system. If you retrained your model last Tuesday, your Annex IV documentation must reflect that by deployment. Most compliance teams update documentation quarterly, at best.
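One way to keep documentation current is to generate it from the deployment pipeline itself, so a release cannot ship without a doc that describes exactly that release. The metadata fields below are hypothetical; Annex IV defines the required content, not this structure.

```python
from datetime import datetime, timezone

# Hypothetical metadata emitted by a training run.
run_metadata = {
    "model_version": "credit-scorer-2.3.2",
    "training_data_snapshot": "loans_2026_02_01",
    "evaluation_metrics": {"auc": 0.84},
}

def render_tech_doc(meta: dict) -> str:
    """Render a documentation stub from live run metadata, so the doc can
    never describe an older model than the one being deployed."""
    return (
        f"Technical documentation (generated {datetime.now(timezone.utc).date()})\n"
        f"Model version: {meta['model_version']}\n"
        f"Training data: {meta['training_data_snapshot']}\n"
        f"Evaluation: {meta['evaluation_metrics']}\n"
    )

# Run as a deployment gate: the release fails if the doc cannot be generated.
print(render_tech_doc(run_metadata))
```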
4. Human Oversight Mechanisms
Article 14 requires that high-risk AI systems be designed so that human overseers can effectively monitor, understand, and, where necessary, override their outputs. This must be implemented at the system level and evidenced; policy statements are not enough.
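What "implemented at the system level" can look like: the override is a first-class field on the decision record, so exercising oversight automatically produces evidence. This structure is a sketch of one possible design, not an Article 14 template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    model_output: str                      # e.g. "reject"
    overridden_output: Optional[str] = None
    overridden_by: Optional[str] = None    # identity of the human overseer
    override_reason: Optional[str] = None

    def override(self, who: str, new_output: str, reason: str) -> None:
        """Record a human override together with who made it and why."""
        self.overridden_by = who
        self.overridden_output = new_output
        self.override_reason = reason

    @property
    def final_output(self) -> str:
        return self.overridden_output if self.overridden_by else self.model_output

d = Decision(model_output="reject")
d.override(who="analyst@example.com", new_output="refer_to_manual_review",
           reason="Applicant documentation not reflected in model features")
assert d.final_output == "refer_to_manual_review"
```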
5. Deployer-Specific Obligations
Many teams conflate "provider" and "deployer" obligations. If you are the company putting an AI system to use, even one built by a third party, you carry deployer obligations under Article 26. These include maintaining operational logs, informing affected individuals, and, for certain deployers (among them public bodies and those using AI for creditworthiness or insurance risk pricing), conducting a fundamental rights impact assessment under Article 27. This catches most SaaS companies off guard.
The Compliance Stack Gap
The regulatory obligations are operational and continuous, but the average compliance stack is static and annual. Spreadsheets and policy documents cannot satisfy an obligation that requires real-time logging, lifecycle documentation, and evidence of continuous control operation. This is precisely the gap Vigilens was built to close.
What "Audit-Ready" Actually Looks Like
A regulator asking for evidence of compliance with Article 9 doesn't want a risk register in a spreadsheet. They want a timestamped, traceable record showing that risk was assessed at each stage of the AI lifecycle, that identified risks had mitigations, and that those mitigations were tested and evidenced — from your actual systems, not a document reconstructed the week before the audit.
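As a hedged sketch of what such a record could look like, the snippet below appends hash-chained evidence entries, each tying a lifecycle stage, an identified risk, and a mitigation test to a timestamp. The schema and the hash-chaining choice are my own illustration, not a regulatory format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_evidence(chain: list, stage: str, risk: str,
                    mitigation: str, test_result: str) -> dict:
    """Append one tamper-evident evidence record; each record embeds the
    hash of the previous one, so the chain is verifiable end to end."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "lifecycle_stage": stage,
        "risk": risk,
        "mitigation": mitigation,
        "test_result": test_result,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body

chain: list = []
append_evidence(chain, "training", "sampling bias", "reweighting", "pass")
append_evidence(chain, "deployment", "data drift", "weekly drift monitor", "pass")
```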
The teams that will navigate enforcement without penalty are those that have wired compliance into their engineering workflows from the start.