EU AI Act · Compliance Guide · 2026
Is My Company EU AI Act Compliant?
Here's How to Check.
The EU AI Act is now in force. Fines of up to €35 million are enforceable. Full high-risk AI obligations arrive in August 2026. Yet most companies — including many actively deploying AI — have not done a formal compliance assessment. This guide explains what the Act requires, which articles matter most for your company, and how to find out whether you are compliant today.
What Is the EU AI Act?
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for regulating AI systems. It came into force on 1 August 2024 and is being phased in over three years.
The Act's core approach is risk-based: the higher the potential harm an AI system could cause, the stricter the requirements it must meet. Every AI system falls into one of four categories, from prohibited at the top down to minimal/no risk at the bottom.
- Prohibited (Article 5) — Banned outright. Social scoring, subliminal manipulation, certain real-time biometrics. Enforceable since 2 February 2025.
- High Risk (Article 6 + Annex III) — Full compliance required. Covers critical infrastructure, employment, credit, education, law enforcement, biometrics, migration, and justice AI. Deadline: 2 August 2026.
- Limited Risk (Article 50) — Transparency obligations only. Chatbots and AI-generated content must disclose AI involvement. Deepfakes must be labelled.
- Minimal / No Risk — No specific obligations. Spam filters, games, and most consumer AI tools fall here — but misclassification is a real risk.
Does the EU AI Act Apply to My Company?
The Act has extraterritorial reach. It applies to you if any of the following are true:
- → You are a provider (developer) of an AI system placed on the EU market
- → You are a deployer using an AI system in the EU, even if you didn't build it
- → You are a distributor or importer of AI systems sold into the EU
- → Your company is based outside the EU but your AI's output is used within the EU
In short: if you build or use AI that touches EU users or operations, the EU AI Act likely applies to you — regardless of where your company is incorporated.
The Four Risk Levels in Plain Language
Before diving into the specific articles, here is what each risk level means in practice for a typical tech company:
Unacceptable Risk — Article 5: Certain uses of AI are banned outright. If your system falls here, it cannot legally operate in the EU. This includes manipulative AI that exploits psychological vulnerabilities, social scoring based on social behaviour or personal characteristics, and real-time remote biometric identification in public spaces. Enforcement active since 2 February 2025. Fines up to €35M or 7% of global turnover, whichever is higher.
High Risk — Article 6 + Annex III: The most consequential category for most AI businesses. High-risk systems must meet comprehensive requirements covering risk management, data governance, documentation, human oversight, and conformity assessment. Full obligations from 2 August 2026. Fines up to €15M or 3% of global turnover.
Limited Risk — Article 50: Chatbots, AI-generated content, and emotion recognition systems must clearly disclose AI involvement. Deepfakes must be labelled. Users must be told when they're talking to a bot. Relatively lightweight — but not optional.
Minimal / No Risk: Spam filters, AI-powered games, product recommendation engines, and most consumer AI tools in benign applications fall here. No specific obligations — but this can change if you add features that push you into higher-risk territory. Misclassification is a real risk.
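The four tiers above lend themselves to a simple lookup table — useful as the starting point for an internal triage script. The tier names, article references, deadlines, and fine ceilings below follow this guide; the data layout itself is an illustrative assumption, not part of the Act:

```python
# Illustrative summary of the EU AI Act's four risk tiers as described above.
# Tier contents follow this guide; the structure is one possible way to
# encode them for internal triage tooling, not an official schema.

RISK_TIERS = {
    "prohibited": {
        "legal_basis": "Article 5",
        "obligation": "Banned outright; no compliance pathway",
        "enforceable_since": "2025-02-02",
        "max_fine": "EUR 35M or 7% of global turnover",
    },
    "high": {
        "legal_basis": "Article 6 + Annex III",
        "obligation": "Full compliance: risk management, data governance, "
                      "documentation, oversight, conformity assessment",
        "deadline": "2026-08-02",
        "max_fine": "EUR 15M or 3% of global turnover",
    },
    "limited": {
        "legal_basis": "Article 50",
        "obligation": "Transparency: disclose AI involvement, label deepfakes",
    },
    "minimal": {
        "legal_basis": None,
        "obligation": "No specific obligations; re-check after feature changes",
    },
}

def obligations_for(tier: str) -> str:
    """Return the headline obligation for a classified risk tier."""
    return RISK_TIERS[tier]["obligation"]
```

A table like this keeps the classification outcome and its consequences in one place, so a re-classification (say, after a new feature) immediately surfaces the changed obligations.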
Article 5 — Prohibited AI Practices
Article 5 is the Act's hardest line. It lists AI applications that are banned outright, with no exceptions or compliance pathway. If your system does any of the following, it cannot legally operate in the EU:
- ✕ AI that exploits vulnerabilities (age, disability, social circumstances) to distort behaviour in ways that cause harm
- ✕ Subliminal manipulation techniques that operate below conscious awareness
- ✕ Social scoring systems that evaluate people based on social behaviour or personal characteristics
- ✕ Real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
- ✕ Biometric categorisation systems that infer race, political opinions, religious beliefs, or sexual orientation
- ✕ Predictive policing AI based solely on profiling or personality traits
- ✕ Emotion recognition systems in workplaces and educational institutions
Article 5 has been enforceable since 2 February 2025. There is no grace period for these prohibitions.
Article 6 — High-Risk AI Classification
Article 6 is the gateway to the most demanding part of the Act. It defines when an AI system is classified as high-risk via two routes:
Article 6(1) — Product safety integration: If your AI is embedded in a product already covered by EU product safety legislation (medical devices, machinery, aviation, automotive) and that product requires third-party conformity assessment, your AI is automatically high-risk. Examples: AI in a medical diagnostic device, AI in industrial machinery, AI in aviation systems.
Article 6(2) — Annex III use cases: Even if your product is not covered by other EU legislation, your AI is high-risk if it is deployed for one of the eight specific use-case areas listed in Annex III — critical infrastructure, education, employment, essential services, law enforcement, migration, justice, and biometrics. This is the classification that catches most startups and SMEs by surprise.
33% of EU AI startups believe their systems would be classified as high-risk — compared to the 5–15% the Commission initially projected. The real number is likely somewhere in between.
Article 9 — Risk Management System
For high-risk AI systems, Article 9 is the backbone of compliance. It requires providers to establish, implement, document and maintain a continuous risk management system — not a one-time audit, but an ongoing operational process.
Article 9 requires you to:
- Identify and analyse known and foreseeable risks. Document all risks associated with the AI system when used as intended, and reasonably foreseeable misuse scenarios. This must be updated whenever the system is modified.
- Estimate and evaluate the risks. Assess the severity and probability of each risk. Consider the impact on fundamental rights and vulnerable groups specifically.
- Adopt risk management measures. Put in place appropriate mitigations. Prioritise eliminating risks by design, then apply safeguards, then provide information. Residual risk must be judged acceptable.
- Test and verify throughout the lifecycle. Testing must be conducted at appropriate intervals during development and before market placement. For high-risk systems, this includes testing under real-world conditions where possible.
Article 9 doesn't exist in isolation. It works in tandem with Article 10 (data governance), Article 11 (technical documentation), Article 12 (automatic logging), Article 13 (transparency), Article 14 (human oversight), and Article 17 (quality management system). Together, these form the full high-risk compliance framework.
Annex III — The Eight High-Risk Categories
This is the most important list in the Act for most AI companies. If your system operates in any of these categories, you are almost certainly in high-risk territory — and full compliance obligations apply from August 2026.
- 1. Biometrics — Remote biometric identification, emotion recognition, biometric categorisation.
- 2. Critical Infrastructure — AI as safety components in road traffic, water, gas, heating, electricity, digital infrastructure.
- 3. Education & Training — AI that determines access to education, assesses students, or monitors participants.
- 4. Employment — Recruitment screening, CV filtering, performance evaluation, task allocation, monitoring.
- 5. Essential Services — Credit scoring, insurance risk assessment, access to healthcare, social benefits eligibility.
- 6. Law Enforcement — Polygraphs, risk assessment for crime, evidence analysis, crime prediction, profiling.
- 7. Migration & Border — Risk assessment, document verification, examination of asylum applications.
- 8. Justice & Democracy — AI assisting courts, applying the law, influencing elections or voter behaviour.
Important: The Act regulates specific uses of AI within these categories — not entire sectors. An AI tool used in a hospital for administrative scheduling is different from one used for diagnostic decisions. Context matters.
Compliance Checklist: What You Need to Have in Place
If your system is high-risk, here is a summary of what the Act requires you to have before the August 2026 deadline:
- Risk classification documented — You have formally assessed your AI system against the Act's categories and documented the outcome. This is step zero — without it, you cannot know what else is required.
- Risk management system established (Art. 9) — A documented, continuous risk identification and mitigation process is in place and updated with every significant model change.
- Data governance procedures in place (Art. 10) — Training and validation datasets are documented for relevance, representativeness, and freedom from bias.
- Technical documentation complete (Art. 11) — Full technical documentation exists covering system architecture, training methodology, performance metrics, and intended use.
- Automatic logging enabled (Art. 12) — The system logs events automatically to enable post-market monitoring and incident tracing.
- Human oversight mechanisms built in (Art. 14) — Natural persons can monitor, understand, intervene in, and override the AI system's outputs.
- Quality Management System established (Art. 17) — A formal QMS is documented and operational, covering the full AI development and deployment lifecycle.
- Conformity assessment completed (Art. 43) — The system has been assessed — internally or by a notified body — as conforming with all applicable requirements.
- EU database registration (Art. 71) — High-risk AI systems must be registered in the EU's public AI database before market placement.
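For teams tracking these items internally, the checklist above can be encoded as a simple gap report. The item names and article references mirror this list; the tracking structure and example completion data are illustrative assumptions:

```python
# The nine checklist items above, encoded for a basic internal gap report.
# Article references follow this guide; completion status is example data.

CHECKLIST = [
    ("Risk classification documented", None),
    ("Risk management system", "Art. 9"),
    ("Data governance procedures", "Art. 10"),
    ("Technical documentation", "Art. 11"),
    ("Automatic logging", "Art. 12"),
    ("Human oversight mechanisms", "Art. 14"),
    ("Quality management system", "Art. 17"),
    ("Conformity assessment", "Art. 43"),
    ("EU database registration", "Art. 71"),
]

def gap_report(done: set) -> list:
    """Return the checklist items not yet marked complete."""
    return [name for name, _ in CHECKLIST if name not in done]

# Example: only the first two items are in place.
remaining = gap_report({"Risk classification documented",
                        "Risk management system"})
print(len(remaining))  # 7
```

Even a sketch like this makes the dependency visible: everything hinges on the first item, since the classification outcome determines which of the remaining eight apply at all.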
How to Check Your Company's Compliance Status
The fastest way to understand where your company stands is to run a structured classification assessment. This tells you:
- Which category your AI systems fall into (prohibited, high-risk, limited, minimal)
- Which specific articles and obligations apply to you
- What your compliance gap looks like against the August 2026 deadline
- What your next steps should be to reach compliance
Vigilens built a free classifier specifically for this. It walks through the same six-step assessment framework used in our full product — covering your entity type, potential prohibited practices, Annex III categories, special system types, jurisdiction, and your current compliance stage. The result is your classification, your obligations, and a recommended path forward.
It takes under 5 minutes. It is free. And it may be the most important 5 minutes you spend before August 2026.
Find out your EU AI Act status — right now
Run a free structured classification of your AI system. Get your risk level, your specific obligations under Articles 5, 6, and 9, and your path to compliance before the August 2026 deadline.
Classify my AI system — free →
See how Vigilens works first
No account required. Takes under 5 minutes. Work email required.
Sources & references
- European Union (2024). Regulation (EU) 2024/1689 — Artificial Intelligence Act. Official Journal of the EU, 12 July 2024. eur-lex.europa.eu
- European Commission (2025). "AI Act — Shaping Europe's digital future." digital-strategy.ec.europa.eu
- EU AI Act Website (2025). "High-level summary of the AI Act." artificialintelligenceact.eu
- EU AI Act Website (2025). "Annex III — High-Risk AI Systems." artificialintelligenceact.eu
- DLA Piper (August 2025). "Latest wave of EU AI Act obligations take effect." dlapiper.com
- EU AI Act Compliance Checker — artificialintelligenceact.eu. Includes note that 33% of surveyed EU AI startups believe their systems are high-risk vs. 5–15% EC estimate.
Vigilens automates AI governance — turning obligations into executable controls with continuous evidence collection. Get early access.
Company email required. No spam. Privacy Policy.