EU AI Act  ·  Compliance Guide  ·  2026

Is My Company EU AI Act Compliant?
Here's How to Check.

By Vigilens  ·  12 March 2026  ·  10 min read

The EU AI Act is now in force. Fines of up to €35 million are enforceable. Full high-risk AI obligations arrive in August 2026. Yet most companies — including many actively deploying AI — have not done a formal compliance assessment. This guide explains what the Act requires, which articles matter most for your company, and how to find out whether you are compliant today.

What Is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for regulating AI systems. It came into force on 1 August 2024 and is being phased in over three years.

The Act's core approach is risk-based: the higher the potential harm an AI system could cause, the stricter the requirements it must meet. Every AI system falls into one of four categories, from prohibited at the top down to minimal risk at the bottom.

Does the EU AI Act Apply to My Company?

The Act has extraterritorial reach. It applies to you if any of the following are true:

  1. You are a provider placing AI systems on the EU market or putting them into service in the EU — regardless of where you are established.
  2. You are a deployer of an AI system and you are located within the EU.
  3. You are a provider or deployer established outside the EU, but the output produced by your AI system is used in the EU.

In short: if you build or use AI that touches EU users or operations, the EU AI Act likely applies to you — regardless of where your company is incorporated.

The Four Risk Levels in Plain Language

Before diving into the specific articles, here is what each risk level means in practice for a typical tech company:

Unacceptable Risk — Article 5: Certain uses of AI are banned outright. If your system falls here, it cannot legally operate in the EU. This includes manipulative AI that exploits psychological vulnerabilities, social scoring by public authorities, and real-time remote biometric identification in public spaces. Enforcement active since 2 February 2025. Fines up to €35M or 7% of global turnover.

High Risk — Article 6 + Annex III: The most consequential category for most AI businesses. High-risk systems must meet comprehensive requirements covering risk management, data governance, documentation, human oversight, and conformity assessment. Full obligations from 2 August 2026. Fines up to €15M or 3% of global turnover.

Limited Risk — Article 50: Chatbots, AI-generated content, and emotion recognition systems must clearly disclose AI involvement. Deepfakes must be labelled. Users must be told when they're talking to a bot. Relatively lightweight — but not optional.

Minimal / No Risk: Spam filters, AI-powered games, product recommendation engines, and most consumer AI tools in benign applications fall here. No specific obligations — but this can change if you add features that push you into higher-risk territory. Misclassification is a real risk.
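The four tiers above form a strict precedence order: a system is assessed against the strictest tier first, and the strictest match wins. As an illustration only — the boolean flags below are hypothetical simplifications, and real classification requires legal analysis of each question — the triage logic can be sketched like this:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6 + Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal / no specific obligations"

def triage(uses_prohibited_practice: bool,
           safety_component_needing_third_party_assessment: bool,
           annex_iii_use_case: bool,
           interacts_with_humans_or_generates_content: bool) -> RiskLevel:
    """Check tiers from strictest down — order matters."""
    if uses_prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if safety_component_needing_third_party_assessment or annex_iii_use_case:
        return RiskLevel.HIGH
    if interacts_with_humans_or_generates_content:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# Example: a CV-screening tool (Annex III, employment) with a chat interface —
# the high-risk classification takes precedence over the limited-risk one.
print(triage(False, False, True, True).value)  # → high-risk (Article 6 + Annex III)
```

This is also why the "misclassification is a real risk" warning matters: adding a single feature (say, emotion recognition) can flip one of these flags and move the whole system up a tier.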

Article 5 — Prohibited AI Practices

Article 5 is the Act's hardest line. It lists AI applications that are banned outright, with only narrow, strictly defined exceptions and no general compliance pathway. If your system does any of the following, it cannot legally operate in the EU:

  1. Uses subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm.
  2. Exploits vulnerabilities related to age, disability, or social or economic situation.
  3. Performs social scoring that leads to detrimental or disproportionate treatment.
  4. Predicts the risk of criminal behaviour based solely on profiling or personality traits.
  5. Builds facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  6. Infers emotions in workplaces or educational institutions (outside medical or safety uses).
  7. Uses biometric categorisation to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.
  8. Performs real-time remote biometric identification in publicly accessible spaces for law enforcement (permitted only under narrow, strictly defined exceptions).

Article 5 has been enforceable since 2 February 2025. There is no grace period for these prohibitions.

Article 6 — High-Risk AI Classification

Article 6 is the gateway to the most demanding part of the Act. It defines when an AI system is classified as high-risk via two routes:

Article 6(1) — Product safety integration: If your AI is embedded in a product already covered by EU product safety legislation (medical devices, machinery, aviation, automotive) and that product requires third-party conformity assessment, your AI is automatically high-risk. Examples: AI in a medical diagnostic device, AI in industrial machinery, AI in aviation systems.

Article 6(2) — Annex III use cases: Even if your product is not covered by other EU legislation, your AI is high-risk if it is deployed for one of the eight specific use-case areas listed in Annex III — critical infrastructure, education, employment, essential services, law enforcement, migration, justice, and biometrics. This is the classification that catches most startups and SMEs by surprise.

33% of EU AI startups believe their systems would be classified as high-risk — compared to the 5–15% the Commission initially projected. The real number is likely somewhere in between.

Article 9 — Risk Management System

For high-risk AI systems, Article 9 is the backbone of compliance. It requires providers to establish, implement, document and maintain a continuous risk management system — not a one-time audit, but an ongoing operational process.

Article 9 requires you to:

  1. Identify and analyse known and foreseeable risks. Document all risks associated with the AI system when used as intended, and reasonably foreseeable misuse scenarios. This must be updated whenever the system is modified.
  2. Estimate and evaluate the risks. Assess the severity and probability of each risk. Consider the impact on fundamental rights and vulnerable groups specifically.
  3. Adopt risk management measures. Put in place appropriate mitigations. Prioritise eliminating risks by design, then apply safeguards, then provide information. Residual risk must be judged acceptable.
  4. Test and verify throughout the lifecycle. Testing must be conducted at appropriate intervals during development and before market placement. For high-risk systems, this includes testing under real-world conditions where possible.
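The Act does not prescribe a format for the risk management system, but the four steps above map naturally onto a living risk register that is re-reviewed on every system change. A minimal sketch — all names and the 1–5 scoring scale here are hypothetical conventions, not something the Act mandates:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str           # e.g. "biased ranking of candidates"
    severity: int              # 1 (negligible) .. 5 (critical)
    probability: int           # 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)
    residual_accepted: bool = False   # has residual risk been judged acceptable?

    def score(self) -> int:
        # Simple severity × probability rating for prioritising mitigations.
        return self.severity * self.probability

@dataclass
class RiskRegister:
    """One living register per high-risk system, updated on every modification."""
    system_name: str
    last_reviewed: date
    risks: list[Risk] = field(default_factory=list)

    def open_items(self) -> list[Risk]:
        # Any risk whose residual level has not been accepted blocks release.
        return [r for r in self.risks if not r.residual_accepted]
```

The design point the sketch illustrates: compliance here is a process, not a document — the register only satisfies the "continuous" requirement if `last_reviewed` actually moves every time the system does.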

Article 9 doesn't exist in isolation. It works in tandem with Article 10 (data governance), Article 11 (technical documentation), Article 12 (automatic logging), Article 13 (transparency), Article 14 (human oversight), and Article 17 (quality management system). Together, these form the full high-risk compliance framework.

Annex III — The Eight High-Risk Categories

This is the most important list in the Act for most AI companies. If your system operates in any of these categories, you are almost certainly in high-risk territory — and full compliance obligations apply from August 2026:

  1. Biometrics — remote biometric identification, biometric categorisation, and emotion recognition (where not prohibited outright under Article 5).
  2. Critical infrastructure — safety components in digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
  3. Education and vocational training — admission, evaluation, and exam-proctoring systems.
  4. Employment — recruitment, CV screening, promotion, task allocation, and termination decisions.
  5. Essential services — credit scoring, life and health insurance pricing, and access to public benefits and emergency services.
  6. Law enforcement — risk assessments, evidence evaluation, and profiling.
  7. Migration, asylum, and border control — application assessment and verification systems.
  8. Administration of justice and democratic processes — systems assisting judicial decisions or intended to influence elections.

Important: The Act regulates specific uses of AI within these categories — not entire sectors. An AI tool used in a hospital for administrative scheduling is different from one used for diagnostic decisions. Context matters.

Compliance Checklist: What You Need to Have in Place

If your system is high-risk, here is a summary of what the Act requires you to have in place before the August 2026 deadline:

  1. A continuous risk management system (Article 9).
  2. Data governance covering training, validation, and testing data (Article 10).
  3. Technical documentation, kept up to date (Article 11).
  4. Automatic logging of events over the system's lifetime (Article 12).
  5. Transparency and instructions for use for deployers (Article 13).
  6. Effective human oversight measures (Article 14).
  7. Appropriate levels of accuracy, robustness, and cybersecurity (Article 15).
  8. A quality management system (Article 17), a completed conformity assessment, CE marking, and registration in the EU database.

How to Check Your Company's Compliance Status

The fastest way to understand where your company stands is to run a structured classification assessment. This tells you:

  1. Which risk tier your system falls into.
  2. Which specific articles and obligations apply to you.
  3. What deadlines you face and where your compliance work should start.

Vigilens built a free classifier specifically for this. It walks through the same six-step assessment framework used in our full product — covering your entity type, potential prohibited practices, Annex III categories, special system types, jurisdiction, and your current compliance stage. The result is your classification, your obligations, and a recommended path forward.

It takes under 5 minutes. It is free. And it may be the most important 5 minutes you spend before August 2026.

Find out your EU AI Act status — right now

Run a free structured classification of your AI system. Get your risk level, your specific obligations under Articles 5, 6, and 9, and your path to compliance before the August 2026 deadline.

Classify my AI system — free → See how Vigilens works first

No account required. Takes under 5 minutes. Work email required.

Sources & references

  1. European Union (2024). Regulation (EU) 2024/1689 — Artificial Intelligence Act. Official Journal of the EU, 12 July 2024. eur-lex.europa.eu
  2. European Commission (2025). "AI Act — Shaping Europe's digital future." digital-strategy.ec.europa.eu
  3. EU AI Act Website (2025). "High-level summary of the AI Act." artificialintelligenceact.eu
  4. EU AI Act Website (2025). "Annex III — High-Risk AI Systems." artificialintelligenceact.eu
  5. DLA Piper (August 2025). "Latest wave of EU AI Act obligations take effect." dlapiper.com
  6. EU AI Act Compliance Checker — artificialintelligenceact.eu. Includes note that 33% of surveyed EU AI startups believe their systems are high-risk vs. 5–15% EC estimate.

Vigilens automates AI governance — turning obligations into executable controls with continuous evidence collection. Get early access.

