Understanding the EU AI Act: The Complete Guide to the World's First Comprehensive AI Regulation
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is the world's first comprehensive legal framework governing artificial intelligence. It applies a risk-based regulatory model to AI systems placed on or used within the European Union, with obligations phased in between February 2025 and August 2027.
Whether you are a startup founder shipping an AI product, a procurement officer evaluating vendor tools, or a compliance professional building an internal programme, this guide covers everything you need to understand the regulation, determine how it affects your organisation, and start taking action.
TL;DR — Key facts about the EU AI Act
- The EU AI Act is the world's first binding, horizontal regulation of artificial intelligence, applicable across all sectors.
- It uses a risk-based approach with four tiers: prohibited, high-risk, limited risk, and minimal risk.
- The regulation applies to any organisation that places an AI system on the EU market or uses one within the EU — regardless of where that organisation is incorporated.
- Prohibited AI practices have been enforceable since 2 February 2025. High-risk obligations apply from 2 August 2026.
- Penalties reach up to EUR 35 million or 7% of global annual turnover for the most serious infringements.
- Providers of high-risk AI systems bear the heaviest compliance burden: technical documentation, risk management systems, conformity assessments, and post-market monitoring.
- The AI Act complements the GDPR — it does not replace it. Organisations processing personal data in AI systems must comply with both.
What is the EU AI Act and why does it exist?
The European Commission published its initial proposal for the AI Act in April 2021, and the final text was formally adopted in June 2024 after extensive negotiation between the European Parliament and Council. The regulation entered into force on 1 August 2024.
The legislative goals are threefold:
- Protect fundamental rights — prevent AI systems from undermining human dignity, non-discrimination, privacy, and democratic participation.
- Create legal certainty — give businesses a single, harmonised regulatory framework rather than a patchwork of 27 national rules.
- Foster trustworthy innovation — establish conditions under which AI systems can be developed and deployed responsibly, including through regulatory sandboxes and codes of conduct.
The legal basis for the AI Act is Article 114 of the Treaty on the Functioning of the European Union (TFEU), which allows the EU to adopt measures for the functioning of the internal market. This makes the AI Act a product-safety regulation at its core — it governs AI systems as products placed on the market, much like the EU's existing frameworks for medical devices, machinery, and toys.
Structure of the regulation
The AI Act is organised into 13 chapters containing 113 articles, supported by 13 annexes.
The annexes define high-risk use-case areas (Annex III), technical documentation requirements (Annex IV), EU declaration of conformity content (Annex V), conformity assessment procedures (Annexes VI–VIII), and more.
The risk-based approach: four tiers explained
The AI Act's central organising principle is that regulatory requirements should be proportionate to risk. The higher the risk an AI system poses to health, safety, or fundamental rights, the stricter the obligations.
Tier 1 — Prohibited AI practices (Article 5)
These AI applications are banned outright because the EU considers their risks unacceptable:
- Social scoring — evaluating or classifying people based on social behaviour or personal characteristics in ways that lead to unjustified or disproportionate detrimental treatment. The final text applies this ban to private actors as well as public authorities.
- Subliminal manipulation — deploying techniques below the threshold of conscious awareness to distort behaviour and cause harm.
- Exploitation of vulnerabilities — targeting people based on age, disability, or socio-economic situation.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions for specific serious crimes).
- Emotion recognition in workplaces and educational institutions (except for medical or safety purposes).
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Predictive policing of individuals — assessing the risk that a person will commit a criminal offence based solely on profiling or personality traits.
- Biometric categorisation using sensitive attributes (race, religion, sexual orientation, political opinions).
Real-world example: A retail chain deploys cameras that analyse shoppers' emotional states to adjust pricing dynamically. The Article 5 ban on emotion recognition is limited to workplaces and educational institutions, so this use is not prohibited on that ground alone. It may nonetheless be prohibited as a manipulative technique if it materially distorts purchasing behaviour and causes significant harm, and the emotion-recognition component triggers transparency obligations under Article 50 in any case.
These prohibitions have been enforceable since 2 February 2025.
Tier 2 — High-risk AI systems (Articles 6–49)
High-risk AI systems are permitted but subject to extensive requirements before they can be placed on the market and throughout their lifecycle. There are two pathways to high-risk classification:
Pathway A (Article 6(1)): The AI system is a safety component of, or is itself, a product covered by existing EU harmonised legislation listed in Annex I (e.g., medical devices, machinery, toys, radio equipment) and is required to undergo a third-party conformity assessment under that legislation.
Pathway B (Article 6(2)): The AI system falls into one of the use-case areas listed in Annex III:
- Biometrics — remote biometric identification, biometric categorisation, and emotion recognition (where not prohibited outright).
- Critical infrastructure — safety components in the management of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity.
- Education and vocational training — admissions, assessment, and exam proctoring.
- Employment and worker management — recruitment, task allocation, performance monitoring, promotion, and termination.
- Access to essential services — credit scoring, life and health insurance pricing, emergency-call triage, and eligibility for public benefits.
- Law enforcement — individual risk assessments, evidence reliability evaluation, and profiling.
- Migration, asylum, and border control — application assessment and risk evaluation.
- Administration of justice and democratic processes — assisting judicial authorities and influencing elections or referenda.
High-risk providers must implement:
- A risk management system running iteratively through the entire AI lifecycle.
- Data governance and data quality measures.
- Technical documentation meeting Annex IV requirements.
- Automatic logging (record-keeping) capabilities — see the illustrative logging sketch after this list.
- Transparency and instructions of use for deployers.
- Human oversight mechanisms.
- Accuracy, robustness, and cybersecurity requirements.
- A quality management system.
- Conformity assessment — either self-assessment or through a notified body.
- CE marking and EU declaration of conformity.
- Registration in the EU database.
- Post-market monitoring and incident reporting.
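The Act does not prescribe a log format, but Article 12 requires high-risk systems to automatically record events relevant for traceability throughout their lifetime. As a rough illustration only, a structured decision log might look like the following Python sketch; every field name is an assumption chosen for the example, not a regulatory requirement.

```python
# Illustrative sketch only: the AI Act does not prescribe a log format.
# Field names are assumptions chosen to reflect Article 12's goals of
# traceability for each decision, its inputs, and the humans involved.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(system_id: str, model_version: str,
                    input_ref: str, output: str,
                    human_reviewer: str | None = None) -> None:
    """Append one structured, timestamped record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which system produced the output
        "model_version": model_version,    # exact version, for traceability
        "input_ref": input_ref,            # reference to the input, not the raw data
        "output": output,                  # the decision, score, or label produced
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
    }
    logger.info(json.dumps(record))

# Example: record a single credit-scoring decision.
log_ai_decision("credit-scorer", "2.3.1", "application/48213",
                "rejected", human_reviewer="j.smith")
```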
Real-world example: A fintech company uses a machine-learning model to approve or reject consumer loan applications. This is credit scoring — Annex III, area 5(b) — making it high-risk. The provider must produce Annex IV technical documentation, run a conformity assessment, implement human oversight, and monitor the system post-deployment.
Real-world example: A logistics company deploys an AI tool to schedule warehouse shifts and evaluate worker performance. This falls under Annex III, area 4 (employment, worker management), triggering high-risk obligations; as an employer, the deployer must also inform affected workers and their representatives before putting the system into use.
For a detailed walkthrough of high-risk classification, see Is my AI system high-risk?.
Tier 3 — Limited-risk AI systems (Article 50)
Certain AI systems carry transparency obligations regardless of whether they are classified as high-risk:
- AI systems interacting with people — users must be informed they are interacting with an AI (e.g., chatbots).
- Emotion recognition and biometric categorisation systems — individuals subject to these systems must be notified.
- AI-generated or manipulated content (deepfakes) — must be labelled as artificially generated or manipulated.
- AI-generated text published to inform the public on matters of public interest — must be labelled as AI-generated (unless subject to human editorial review).
Real-world example: A SaaS company uses a GPT-powered chatbot for frontline customer support. Under Article 50, every user session must clearly disclose that the user is communicating with an AI system — not a human agent.
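In implementation terms, the obligation is that the disclosure reaches the user clearly, at the latest when the interaction starts. A minimal sketch in Python, assuming a hypothetical generate_reply function standing in for the underlying model (the notice wording is a design choice, not text mandated by the Act):

```python
# Minimal sketch of an Article 50-style disclosure. `generate_reply` is a
# hypothetical stand-in for the underlying model; the notice wording is a
# design choice, not text mandated by the Act.
from typing import Callable

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human agent. "
    "Ask to be transferred if you would like to speak with a person."
)

def start_support_session(generate_reply: Callable[[str], str],
                          user_message: str) -> list[str]:
    """Open a session with the disclosure first, then the model's reply."""
    transcript = [AI_DISCLOSURE]              # disclosure precedes any AI output
    transcript.append(generate_reply(user_message))
    return transcript

# Usage with a stubbed model:
print(start_support_session(lambda msg: f"Echo: {msg}", "Where is my order?"))
```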
Tier 4 — Minimal risk (no specific obligations)
AI systems that do not fall into any of the above categories — such as AI-powered spam filters, inventory optimisation tools, or recommendation engines for entertainment — have no specific obligations under the AI Act. Providers of minimal-risk systems are encouraged (but not required) to voluntarily follow codes of conduct.
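As a purely illustrative way to see how the four tiers relate, the decision logic can be sketched as a small Python function. The boolean flags are simplifications assumed for the example; real classification requires legal analysis of Article 5, Article 6 (including its derogations), and Annex III.

```python
# Illustrative decision logic only. The boolean flags are simplifications
# assumed for this sketch, not tests defined in the Act; borderline cases
# (e.g. the Article 6(3) derogations) need legal analysis.
def classify_risk_tier(is_prohibited_practice: bool,
                       is_annex_i_safety_component: bool,
                       in_annex_iii_area: bool,
                       has_transparency_trigger: bool) -> str:
    """Map the four tiers described above onto simple flags."""
    if is_prohibited_practice:
        return "prohibited"        # Tier 1: banned outright
    if is_annex_i_safety_component or in_annex_iii_area:
        return "high-risk"         # Tier 2: full compliance regime
    if has_transparency_trigger:
        return "limited-risk"      # Tier 3: transparency duties only
    return "minimal-risk"          # Tier 4: no specific obligations

# Example: a customer-support chatbot with no high-risk use case.
print(classify_risk_tier(False, False, False, True))  # -> limited-risk
```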
Determining your tier is the essential first step. Run the free AI Act assessment to classify your system in minutes.
Who does the AI Act apply to?
Territorial scope (Article 2)
The AI Act applies to:
- Providers (developers) that place an AI system on the EU market or put it into service in the EU, regardless of whether they are established inside or outside the EU.
- Deployers (users) of AI systems that are established in or located within the EU.
- Providers and deployers located outside the EU, where the output of the AI system is used in the EU.
This means a US-based company that sells an AI-powered HR tool to a German employer is a provider subject to the AI Act, and the German employer is a deployer with its own set of obligations.
Roles and their obligations
The AI Act defines several distinct roles, each carrying different responsibilities:
- Provider — develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark. Bears the heaviest obligations.
- Deployer — uses an AI system under its own authority in a professional capacity.
- Importer — established in the EU and places on the market an AI system bearing the name or trademark of a person established outside the EU.
- Distributor — makes an AI system available on the EU market without being the provider or importer.
- Authorised representative — established in the EU and mandated in writing by a non-EU provider to carry out that provider's obligations.
For a detailed breakdown of provider vs. deployer responsibilities, see AI Act provider vs. deployer obligations.
Important role-shifting rule: If a deployer substantially modifies a high-risk AI system, puts its own name or trademark on it, or changes its intended purpose, that deployer becomes a provider and assumes the full provider obligations (Article 25).
Key deadlines and phased enforcement timeline
The AI Act uses a staggered enforcement timeline:
- 1 August 2024 — entry into force.
- 2 February 2025 — prohibitions on unacceptable-risk practices and AI literacy obligations apply.
- 2 August 2025 — obligations for providers of general-purpose AI models apply, along with the governance and penalty provisions.
- 2 August 2026 — most remaining provisions apply, including high-risk obligations for Annex III systems and deployer duties.
- 2 August 2027 — high-risk obligations apply to AI systems that are safety components of Annex I products.
For organisations developing or deploying high-risk AI, the 2 August 2026 deadline is the most critical near-term milestone. Use the EU AI Act compliance checklist to build a concrete action plan.
Relationship with the GDPR and other EU regulations
The AI Act does not exist in isolation. It forms part of a broader EU digital regulatory ecosystem:
AI Act and GDPR
Most AI systems process personal data, which means the GDPR applies alongside the AI Act. Key intersections include:
- Data Protection Impact Assessments (DPIAs) under GDPR Article 35 overlap significantly with AI Act risk management and fundamental rights impact assessments. Organisations can integrate these into a single process.
- Automated decision-making under GDPR Article 22 gives individuals the right not to be subject to solely automated decisions with legal or significant effects. The AI Act's human oversight requirements for high-risk systems reinforce this.
- Data minimisation (GDPR Article 5(1)(c)) can create tension with AI systems that require large training datasets. The AI Act addresses this by requiring data governance practices that ensure training data is relevant, sufficiently representative, and, to the best extent possible, free of errors.
- Lawful basis for processing remains a GDPR requirement. Using AI does not create a new lawful basis.
For a thorough comparison, see EU AI Act vs. GDPR: Differences and overlap. For AI-specific GDPR guidance, see Getting started with GDPR compliance for AI.
AI Act and other frameworks
- Product safety legislation — High-risk AI systems embedded in products already regulated (medical devices, machinery, toys) must comply with both the AI Act and the relevant sector regulation.
- Digital Services Act (DSA) — Platforms using AI recommender systems must comply with DSA transparency obligations and the AI Act's transparency requirements.
- Cyber Resilience Act (CRA) — AI systems that are digital products with network connectivity must also meet CRA cybersecurity requirements, though for high-risk AI systems the AI Act's own cybersecurity provisions (Article 15) apply as the more specific rules.
- EU Charter of Fundamental Rights — The AI Act explicitly references the Charter and is designed to operationalise its protections in the context of AI.
What companies need to do — by role
If you are a provider of a high-risk AI system
- Classify your system — determine whether it is high-risk via Article 6 and Annex III. Run the free AI Act assessment to get a preliminary classification.
- Build your AI systems inventory — catalogue every AI system you provide, including its risk tier, intended purpose, and target market. See How to build an AI systems inventory and the data-model sketch after this list.
- Implement a risk management system — an iterative process running through the full lifecycle (Article 9).
- Prepare technical documentation — cover all nine sections of Annex IV.
- Conduct a conformity assessment — self-assessment or notified body, depending on the use case.
- Affix CE marking and draw up the EU declaration of conformity.
- Register in the EU database before placing the system on the market.
- Establish post-market monitoring and serious-incident reporting processes.
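To make the inventory step concrete, each entry can be captured in a simple data model. The following sketch is one possible shape; every field name is an illustrative assumption, not a schema required by the Act.

```python
# One possible shape for an AI systems inventory entry. Every field name
# is an illustrative assumption, not a schema required by the Act.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                           # internal identifier for the system
    intended_purpose: str               # as documented for users and authorities
    risk_tier: str                      # prohibited / high-risk / limited-risk / minimal-risk
    our_role: str                       # provider / deployer / importer / distributor
    target_markets: list[str] = field(default_factory=list)
    annex_iii_area: str | None = None   # e.g. "5(b) credit scoring", if applicable

inventory = [
    AISystemRecord(
        name="loan-approval-model",
        intended_purpose="Creditworthiness evaluation of consumer loan applicants",
        risk_tier="high-risk",
        our_role="provider",
        target_markets=["DE", "FR"],
        annex_iii_area="5(b) credit scoring",
    ),
]
```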
If you are a deployer of a high-risk AI system
- Verify provider compliance — confirm CE marking, documentation, and registration.
- Assign human oversight — designate competent individuals and train them.
- Retain logs — keep automatically generated logs for at least six months (see the retention sketch after this list).
- Conduct a fundamental rights impact assessment (FRIA) — required for public-sector deployers and certain private deployers. See the FRIA guide.
- Notify affected persons — inform individuals that they are subject to a high-risk AI system.
- Cooperate with market surveillance authorities on request.
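For the log-retention duty above, a small helper can check that the six-month minimum has elapsed before anything is deleted. A minimal sketch, assuming a 183-day calendar approximation of six months; other EU or national law may require longer retention.

```python
# Illustrative retention check for the six-month minimum. 183 days is an
# assumed calendar approximation of "six months"; other EU or national law
# may require a longer period before deletion.
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: date, today: date | None = None) -> bool:
    """True only once the six-month minimum retention has elapsed."""
    today = today or date.today()
    return today - log_created >= MIN_RETENTION

print(may_delete(date(2026, 9, 1), today=date(2027, 1, 15)))  # False: too early
print(may_delete(date(2026, 9, 1), today=date(2027, 3, 15)))  # True
```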
If you provide or deploy a GPAI model or system
Providers of general-purpose AI models must comply with Articles 51–56, including technical documentation (covering, among other things, estimated energy consumption), information for downstream providers, a policy to comply with EU copyright law, and a public summary of training content. Models posing systemic risk face additional obligations: model evaluation and adversarial testing, systemic-risk assessment and mitigation, serious-incident reporting, and adequate cybersecurity protection.
If your AI system is minimal risk
No specific legal obligations, but you should still consider voluntary codes of conduct and ensure you meet GDPR requirements if you process personal data.
Regulatory sandboxes and codes of conduct
The AI Act encourages innovation alongside regulation:
- Regulatory sandboxes (Articles 57–63) — Each EU Member State must establish at least one AI regulatory sandbox by 2 August 2026. These provide controlled environments where companies can develop and test AI systems under regulatory supervision, with reduced compliance burdens during the testing phase. Priority access is given to SMEs and startups.
- Codes of conduct (Article 95) — The Commission and Member States will encourage providers of non-high-risk AI systems to voluntarily adopt codes of conduct. These codes may cover environmental sustainability, AI literacy, inclusive design, and stakeholder participation.
Penalties overview
The AI Act establishes a tiered penalty structure enforced by national market surveillance authorities:
- Prohibited practices (Article 5) — up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
- Non-compliance with most other obligations, including the high-risk requirements — up to EUR 15 million or 3% of turnover.
- Supplying incorrect, incomplete, or misleading information to authorities — up to EUR 7.5 million or 1% of turnover.
For SMEs and startups, the lower of the two figures (flat amount or percentage) applies in each tier, providing some proportionality.
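To see how the caps combine, the rule can be expressed as a short function: a sketch of the Article 99 logic described above, with the flat amount and percentage taken from the relevant tier.

```python
# Sketch of the Article 99 cap logic described above: the higher of the
# flat amount and the turnover percentage applies, except for SMEs and
# startups, where the lower of the two applies.
def max_fine(flat_cap: float, pct_of_turnover: float,
             annual_turnover: float, is_sme: bool) -> float:
    """Return the applicable maximum fine (EUR) for one infringement tier."""
    pct_cap = pct_of_turnover * annual_turnover
    return min(flat_cap, pct_cap) if is_sme else max(flat_cap, pct_cap)

# Prohibited-practice tier (EUR 35M / 7%) for a firm with EUR 1bn turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=False))  # 70000000.0
# Same tier for an SME with EUR 10M turnover:
print(max_fine(35_000_000, 0.07, 10_000_000, is_sme=True))      # 700000.0
```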
National authorities also have the power to order withdrawal of non-compliant AI systems from the market.
For a detailed breakdown, see EU AI Act penalties and fines explained.
Impact on non-EU companies
The AI Act's extraterritorial reach mirrors the GDPR's approach. Non-EU companies are within scope if:
- They place an AI system on the EU market — selling, licensing, or making available an AI product to EU-based customers.
- They are a provider or deployer located outside the EU, but the output of the AI system is used in the EU — for instance, a US AI vendor whose risk-scoring output is used by an EU bank.
Non-EU providers must appoint an authorised representative established in the EU before placing a high-risk AI system on the market. That representative acts as the regulatory point of contact and must be able to provide all compliance documentation to authorities on request.
Real-world example: A Canadian SaaS company sells an AI-powered applicant-tracking system to EU employers. The company is a provider under the AI Act, must comply with all high-risk obligations (employment is an Annex III area), and must designate an EU authorised representative.
Common mistakes when interpreting the AI Act
- Assuming it only applies to EU companies. The AI Act has extraterritorial reach. If your AI output is used in the EU, you are likely in scope.
- Waiting for the 2027 deadline. Prohibited practices are already enforceable. AI literacy obligations are already in force. High-risk obligations apply from August 2026, not 2027.
- Confusing "deployer" with "end user." The AI Act defines deployers as organisations using AI under their own authority in a professional capacity — not individual consumers.
- Thinking GDPR compliance covers AI Act compliance. The two regulations overlap but have distinct requirements. A DPIA does not replace an AI Act risk management system or conformity assessment.
- Ignoring role-shifting rules. Modifying a high-risk system, rebranding it, or changing its intended purpose can make a deployer into a provider — with the full compliance burden that entails.
- Assuming "not high-risk" means "no obligations." Even minimal-risk AI systems must comply with the GDPR if they process personal data, and transparency obligations under Article 50 may apply to chatbots, deepfakes, or emotion-recognition features regardless of risk tier.
- Treating compliance as a one-time exercise. The AI Act requires continuous post-market monitoring, iterative risk management, and ongoing conformity. This is a lifecycle obligation.
Step-by-step: how to start your EU AI Act compliance journey
- Educate your team — meet the AI literacy obligation under Article 4 by ensuring all staff involved with AI have sufficient understanding of the regulation.
- Inventory your AI systems — document every AI system you develop, deploy, procure, or integrate. See How to build an AI systems inventory.
- Classify each system — determine the risk tier for each system using Article 6, Annex III, and the free AI Act assessment.
- Map your role — are you a provider, deployer, importer, or distributor for each system? Review provider vs. deployer obligations.
- Gap analysis — compare your current practices against the applicable obligations for your risk tier and role.
- Build a remediation plan — prioritise actions by enforcement deadline, starting with prohibited practices (already enforceable) and working toward August 2026 high-risk requirements. The compliance checklist provides a detailed action plan.
- Implement technical requirements — risk management, data governance, documentation, human oversight, monitoring.
- Complete conformity assessment — self-assessment or notified body, depending on the category.
- Register, mark, and launch — register in the EU database, affix CE marking, and issue the EU declaration of conformity.
- Monitor continuously — post-market monitoring, incident reporting, and iterative risk reassessment.
Frequently asked questions
Does the EU AI Act apply to my company if I am based outside the EU?
Yes, if you place an AI system on the EU market, deploy one in the EU, or produce output used in the EU. The territorial scope in Article 2 is intentionally broad and mirrors the GDPR's extraterritorial reach.
When do I need to comply?
It depends on the obligation. Prohibited practices and AI literacy requirements are already enforceable (since 2 February 2025). High-risk obligations and deployer duties apply from 2 August 2026. Safety-component obligations for products under Annex I apply from 2 August 2027.
Is my chatbot a high-risk AI system?
Most general-purpose chatbots are not high-risk under Annex III, but they are subject to the transparency obligation requiring disclosure that users are interacting with AI. However, if a chatbot is used in a high-risk context — such as providing medical advice that influences treatment decisions — it may be classified as high-risk based on its intended purpose.
What is the difference between the AI Act and the GDPR?
The GDPR governs the processing of personal data. The AI Act governs AI systems as products, regardless of whether they process personal data. Many AI systems fall under both regulations. See EU AI Act vs. GDPR for a detailed comparison.
Can I do the conformity assessment myself?
For most high-risk AI systems under Annex III, self-assessment is permitted (Annex VI procedure). Third-party conformity assessment through a notified body is required only for biometric identification systems and for AI systems that are safety components of products already requiring third-party assessment. See the conformity assessment guide.
What tools can help with EU AI Act compliance?
Compliance platforms can automate risk classification, documentation generation, gap analysis, and monitoring. See the comparison of EU AI Act compliance tools for an independent overview. To start with a quick risk classification, run the free AI Act assessment.
Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.
Check your AI system's compliance
Free assessment — no signup required. Get your risk classification in minutes.
Run free assessment


