EU AI Act Penalties and Fines Explained: What Non-Compliance Actually Costs in 2026
TL;DR
- The EU AI Act establishes three fine tiers: up to EUR 35 million / 7% of global turnover for prohibited practices, up to EUR 15 million / 3% for high-risk and transparency violations, and up to EUR 7.5 million / 1.5% for misleading information to authorities.
- Fines are calculated as the higher of the fixed euro amount or the revenue percentage for large organisations, and the lower of the two for SMEs/startups — a critical proportionality mechanism.
- Enforcement is split between national market surveillance authorities (most AI Act violations) and the European Commission / AI Office (general-purpose AI model obligations).
- Prohibited practices have been enforceable since 2 February 2025; the Commission has already launched initial investigations. High-risk obligations become enforceable 2 August 2026.
- Enforcement is triggered by complaints, serious incident reports, proactive surveillance, sector-specific regulators, and whistleblower protections under Article 87.
- Fines exceed GDPR maximums for the most serious violations (7% vs 4% of global turnover), making the AI Act the harshest EU digital regulation by penalty.
- Practical risk reduction starts with classification, documentation, supply chain verification, and incident reporting processes.
The EU AI Act carries the largest regulatory fines ever imposed on AI systems — exceeding even GDPR. If your organisation provides or deploys AI in the EU market, understanding the penalty structure is not optional. It is the financial backstop that gives every other obligation in the Act real teeth.
This guide covers the three fine tiers in detail, who enforces them, what triggers enforcement, how fines are calculated for different company sizes, how the AI Act compares with GDPR, and practical steps to reduce your exposure before the 2 August 2026 deadline.
The three fine tiers under Article 99
The AI Act establishes a tiered penalty structure in Article 99. For each tier, the applicable fine is whichever amount is higher — the fixed euro figure or the revenue percentage. (For SMEs and startups, this calculation is inverted — see the adjustment section below.)
Tier 1: Prohibited practices — up to EUR 35 million or 7% of global annual turnover
This is the harshest tier. It applies to violations of Article 5, which establishes an absolute ban on AI practices the EU considers incompatible with fundamental rights. These prohibitions have been enforceable since 2 February 2025.
The prohibited practices are:
- Subliminal, manipulative, or deceptive AI techniques: Systems that deploy techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm.
- Exploitation of vulnerabilities: AI systems that exploit vulnerabilities of specific groups due to age, disability, or social or economic situation to materially distort their behaviour in a way that causes significant harm.
- Social scoring: AI systems that evaluate or classify natural persons based on social behaviour or personal characteristics, leading to detrimental treatment disproportionate to the context or unjustified by the behaviour. The final Act applies this ban to both public and private actors.
- Predictive policing based solely on profiling: AI systems that assess the risk of an individual committing a criminal offence based solely on profiling or personality traits — without objective, verifiable facts or an assessment of individual involvement in criminal activity.
- Untargeted facial image scraping: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Emotion recognition in workplaces and education: AI systems that infer emotions of natural persons in the workplace or educational institutions, except where the system is intended for medical or safety reasons.
- Biometric categorisation for sensitive attributes: AI systems that categorise natural persons based on biometric data to infer or deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. (Lawful labelling or filtering of biometric datasets in law enforcement is excluded.)
- Real-time remote biometric identification in public spaces for law enforcement: Banned with narrow exceptions for targeted searches for specific crime victims, prevention of specific imminent threats, and identification of suspects for specific serious criminal offences.
Real-world example — social media platform: A social media company deploys an AI system that analyses user behaviour patterns to identify psychologically vulnerable users and serves them targeted content designed to increase engagement through addictive mechanisms. This could constitute a subliminal/manipulative technique exploiting vulnerabilities — potentially triggering Tier 1 penalties.
Real-world example — employer surveillance: A logistics company installs AI-powered cameras in its warehouse that continuously monitor workers' facial expressions to detect "disengagement" and flag underperforming employees. Emotion recognition in the workplace is prohibited under Article 5(1)(f). Tier 1 applies.
Tier 2: High-risk and transparency violations — up to EUR 15 million or 3% of global annual turnover
This tier covers non-compliance with the obligations that form the core of the AI Act's regulatory framework:
- High-risk AI system obligations (Articles 8-15): risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity
- Provider obligations (Articles 16-22): quality management, conformity assessment, CE marking, EU database registration, post-market monitoring
- Deployer obligations (Article 26): human oversight, log retention, transparency to affected persons
- FRIA obligations (Article 27): fundamental rights impact assessments for certain deployers
- Transparency obligations for limited-risk systems (Article 50): disclosure of AI interaction, labelling of AI-generated content, deepfake disclosure
- GPAI model provider obligations (Articles 51-55): model documentation, copyright compliance, systemic risk assessment
- Conformity assessment violations (Article 43): failure to complete conformity assessment, false declarations
This is the tier most organisations need to prepare for. It covers the obligations that become enforceable on 2 August 2026 and applies to the broadest range of actors — providers, deployers, importers, and distributors.
Real-world example — recruitment AI provider: A company develops an AI-powered CV screening tool (high-risk under Annex III Domain 4) and places it on the market without completing conformity assessment, preparing technical documentation, or establishing a risk management system. The company faces up to EUR 15 million or 3% of global turnover for non-compliance with Articles 9, 11, and 43.
Real-world example — bank deployer: A European bank deploys a third-party AI credit scoring system without assigning human oversight personnel, retaining system logs, or informing credit applicants that an AI system is involved in the decision. The bank faces Tier 2 penalties for violating deployer obligations under Article 26 — even though it did not build the system.
Real-world example — chatbot without disclosure: A customer service company deploys an AI chatbot that handles consumer complaints without disclosing to users that they are interacting with an AI system. This violates Article 50 transparency obligations — Tier 2.
Tier 3: Incorrect or misleading information — up to EUR 7.5 million or 1.5% of global annual turnover
This tier applies when organisations provide incorrect, incomplete, or misleading information to national competent authorities or notified bodies in the course of regulatory interactions:
- False or incomplete responses to information requests from market surveillance authorities
- Misleading documentation in conformity assessment processes
- Inaccurate registration data in the EU database
- False statements or omissions in serious incident reports
This tier may appear less severe, but it has a compounding effect: organisations that mislead regulators lose the goodwill that might otherwise mitigate penalties for underlying Tier 1 or Tier 2 violations.
Real-world example: A provider registers an AI system in the EU database, describing it as a "customer analytics tool" when it is actually used for creditworthiness assessment. The inaccurate registration constitutes misleading information — Tier 3 — and the underlying failure to comply with high-risk obligations adds Tier 2 exposure.
SME and startup adjustment mechanism
The AI Act includes a proportionality adjustment for SMEs and startups that fundamentally changes the fine calculation. For these organisations, the penalty caps are applied using whichever is lower — the fixed euro amount or the percentage of turnover. This is the inverse of the standard calculation.
How it works in practice
For a startup with EUR 2 million in annual turnover, the maximum Tier 1 fine is EUR 140,000 — not EUR 35 million. The SME adjustment ensures that penalties remain meaningful without being existentially destructive for smaller companies.
Qualifying as an SME: The EU definition of SME (fewer than 250 employees, annual turnover not exceeding EUR 50 million, or annual balance sheet total not exceeding EUR 43 million) applies. Startups are companies that have been operating for fewer than five years and meet certain criteria related to innovation and scalability.
Caution: The SME adjustment applies to the maximum fine, not to the likelihood of enforcement. National authorities may be just as likely to investigate a non-compliant startup as a non-compliant multinational — particularly if the startup's AI system causes actual harm.
Who enforces the AI Act?
Enforcement responsibility is divided between national and EU-level bodies, each with distinct competencies.
National enforcement
Each EU Member State must designate at least one market surveillance authority responsible for receiving complaints, conducting investigations and inspections, ordering corrective actions, and imposing fines. In many Member States, the data protection authority (which already enforces GDPR) is expected to take this role. Some countries are establishing dedicated AI authorities. Member States must also designate notifying authorities to supervise notified bodies performing third-party conformity assessments.
EU-level enforcement
- The European Commission: Exclusively supervises and enforces rules for general-purpose AI models (GPAI). The Commission can investigate GPAI providers directly, request information, conduct evaluations, and impose fines.
- The European AI Office: Coordinates enforcement across Member States, provides technical expertise, and supports the Commission on GPAI enforcement.
- Scientific panel of independent experts: Advises the AI Office on GPAI model evaluation and systemic risk assessment.
Enforcement powers
Market surveillance authorities can: request documentation, data, and source code access; obtain access to AI systems through APIs and testing environments; conduct unannounced on-site inspections; perform or commission testing and auditing; require corrective measures within specified deadlines; order market withdrawal or recall of non-compliant systems; restrict or prohibit use of systems presenting serious risks; impose administrative fines; and publish enforcement decisions including the infringer's identity.
What triggers enforcement
Enforcement actions are typically initiated through five channels:
- Complaints from affected individuals, consumer organisations, employee unions, or civil liberties groups — expected to be the primary early enforcement driver, mirroring GDPR patterns.
- Serious incident reports by providers or deployers (mandatory within 15 days under Article 73). Each report triggers a compliance review. Failure to report is itself a Tier 2 violation.
- Proactive market surveillance by national authorities — systematic review of AI systems on the market, targeted sector audits, and monitoring the EU database for completeness.
- Sector-specific regulators (financial, health, transport) flagging non-compliance to AI market surveillance authorities. Systems that fail both the AI Act and sector regulation draw attention from multiple enforcement bodies.
- Whistleblower reports protected under Article 87. Employees and contractors who report violations are protected against retaliation, creating an enforcement channel organisations cannot control internally.
Enforcement timeline
- 2 February 2025: Prohibited practices (Article 5) and AI literacy obligations (Article 4) become enforceable.
- 2 August 2025: Obligations for general-purpose AI model providers apply; Member States must have designated their authorities and laid down penalty rules.
- 2 August 2026: High-risk and transparency obligations become enforceable. This is the main compliance deadline for most organisations.
- 2 August 2027: End of the extended transition for high-risk AI embedded in products covered by existing EU product legislation (Annex I).
Early enforcement signals
Enforcement is not waiting for the August 2026 deadline. In early 2026, the European Commission launched the first formal investigations into potential prohibited AI practices. Several national data protection authorities have signalled they will begin AI Act enforcement alongside GDPR activities, and the AI Office has published initial interpretive guidance on prohibited practices.
AI Act fines compared with GDPR
For organisations already familiar with GDPR enforcement, the comparison is instructive:
The AI Act surpasses GDPR in maximum penalty severity. For the most serious violations (prohibited practices), the 7% / EUR 35 million cap is 75% higher than GDPR's 4% / EUR 20 million top tier. For high-risk system violations (the tier affecting most organisations), the 3% / EUR 15 million cap sits between GDPR's two tiers: below the 4% / EUR 20 million top tier, but above the 2% / EUR 10 million lower tier.
Key lesson from GDPR enforcement: The largest GDPR fines have been imposed for systemic non-compliance — not isolated errors. Organisations that lacked documentation, ignored data subject rights, or failed to implement basic security measures received the harshest penalties. The same pattern is expected under the AI Act: systematic failure to implement technical documentation, risk management, or conformity assessment will attract the largest fines — while organisations that made good-faith efforts with some gaps will face corrective orders rather than maximum penalties. For a full comparison, see our AI Act vs GDPR guide.
Real-world penalty calculation examples
Example 1: Large multinational provider
Profile: Global technology company, EUR 10 billion annual turnover, provides an AI hiring platform classified as high-risk under Annex III.
Violation: Failed to complete conformity assessment, no technical documentation per Annex IV, no risk management system. Discovered through a complaint by a job applicant who was rejected by the AI system.
Tier: Tier 2 (high-risk obligations).
Calculation: Higher of EUR 15 million or 3% of EUR 10B = EUR 300 million. Maximum fine: EUR 300 million.
Example 2: Mid-size European bank (deployer)
Profile: European bank, EUR 200 million annual turnover, deploys a third-party AI credit scoring system.
Violation: No human oversight assigned, no notification to credit applicants that AI is involved, no log retention, no FRIA despite being an essential service provider.
Tier: Tier 2 (deployer obligations).
Calculation: Higher of EUR 15 million or 3% of EUR 200M = EUR 6 million. Maximum fine: EUR 15 million.
Example 3: Startup deploying prohibited practice
Profile: AI startup, EUR 3 million annual turnover, 20 employees, developed an AI system that analyses social media data to profile job seekers' personalities for employers.
Violation: Potential biometric categorisation to infer sensitive attributes (if the system deduces characteristics like political views or religious beliefs from behavioural data) — prohibited under Article 5.
Tier: Tier 1 (prohibited practices), with SME adjustment.
Calculation: Lower of EUR 35 million or 7% of EUR 3M = EUR 210,000. Maximum fine: EUR 210,000 — plus mandatory cessation of the prohibited practice.
Example 4: SME AI provider with documentation failures
Profile: European AI company, EUR 15 million annual turnover, 80 employees, provides an AI proctoring system (high-risk under Annex III Domain 3).
Violation: Inadequate technical documentation, no post-market monitoring system, outdated risk management assessment.
Tier: Tier 2, with SME adjustment.
Calculation: Lower of EUR 15 million or 3% of EUR 15M = EUR 450,000. Maximum fine: EUR 450,000.
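All four calculations above follow one rule: take the higher of the fixed cap and the turnover percentage, or the lower of the two when the SME adjustment applies. A minimal sketch of that logic (illustrative only, not legal advice; the function name and tier encoding are our own, and percentages are expressed in basis points to keep the arithmetic exact):

```python
def max_fine(turnover_eur: int, tier: int, is_sme: bool = False) -> int:
    """Maximum Article 99 fine: higher of the fixed cap or the turnover
    percentage for large organisations, lower of the two for SMEs/startups."""
    caps = {
        1: (35_000_000, 700),  # prohibited practices: EUR 35M / 7% (700 bps)
        2: (15_000_000, 300),  # high-risk & transparency: EUR 15M / 3%
        3: (7_500_000, 150),   # misleading information: EUR 7.5M / 1.5%
    }
    fixed_cap, bps = caps[tier]
    pct_amount = turnover_eur * bps // 10_000
    return min(fixed_cap, pct_amount) if is_sme else max(fixed_cap, pct_amount)

print(max_fine(10_000_000_000, tier=2))           # Example 1: 300000000
print(max_fine(200_000_000, tier=2))              # Example 2: 15000000
print(max_fine(3_000_000, tier=1, is_sme=True))   # Example 3: 210000
print(max_fine(15_000_000, tier=2, is_sme=True))  # Example 4: 450000
```

Note how Examples 2 and 4 diverge despite similar turnover percentages: the bank hits the EUR 15 million fixed cap because the higher amount applies, while the SME provider's exposure stops at the lower percentage figure.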
How to reduce your enforcement risk
1. Classify every AI system in your organisation
You cannot manage risk you have not identified. Build your AI systems inventory and run the free AI Act risk classification for each system. Classification determines which fine tier applies.
2. Prioritise prohibited practices review
Tier 1 fines are already enforceable. Conduct an immediate review of all AI systems for potential prohibited practices — social scoring, manipulative techniques, emotion recognition in workplaces, biometric categorisation for sensitive attributes, and untargeted facial image scraping. If any system is close to the line, obtain a legal opinion before the regulators reach their own conclusion.
3. Document comprehensively
The most common enforcement failure across EU regulations is lack of documentation, not malicious non-compliance. Start your technical documentation and risk management records now. Documentation is evidence of good faith — and good faith is a mitigating factor in penalty determination.
4. Verify your supply chain
If you deploy AI systems built by third parties, request evidence of their compliance: EU declaration of conformity, CE marking, instructions for use, evidence of conformity assessment. The AI Act makes deployer verification an explicit legal obligation — inability to verify is a compliance failure, not just a procurement inconvenience. See the full supply chain analysis in our provider vs deployer guide.
5. Train your team on AI literacy
AI literacy training is already enforceable under Article 4 (since 2 February 2025). Ensure that staff working with AI systems understand the regulatory requirements relevant to their role. This includes human oversight personnel, procurement teams evaluating AI vendors, and developers building AI systems.
6. Establish serious incident reporting processes
Set up internal processes to detect and report serious incidents within the 15-day reporting window. This requires: defined escalation pathways, clear criteria for what constitutes a "serious incident," assigned responsibility for regulatory notifications, and templates for incident reports. Failure to report is itself a Tier 2 violation that compounds the underlying incident.
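A deadline tracker for this window can be very simple. The sketch below assumes the general 15-day window cited above; Article 73 sets shorter windows for certain incident categories, so treat 15 days as the outer bound, and the function names here are our own:

```python
from datetime import date, timedelta

GENERAL_WINDOW_DAYS = 15  # general Article 73 window; some categories are shorter

def reporting_deadline(awareness: date, window_days: int = GENERAL_WINDOW_DAYS) -> date:
    """Latest date to notify the market surveillance authority."""
    return awareness + timedelta(days=window_days)

def days_remaining(awareness: date, today: date) -> int:
    """Days left in the reporting window (negative means overdue)."""
    return (reporting_deadline(awareness) - today).days

# A provider becomes aware of a serious incident on 10 March 2027
print(reporting_deadline(date(2027, 3, 10)))                 # 2027-03-25
print(days_remaining(date(2027, 3, 10), date(2027, 3, 20)))  # 5
```

In practice, the hard part is not the date arithmetic but the trigger: the clock starts when the organisation becomes aware of the incident, which is why defined escalation pathways matter.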
7. Engage with regulatory sandboxes
Regulatory sandboxes established by national authorities offer a structured environment to test AI systems under regulatory supervision. Participation demonstrates proactive compliance engagement and provides access to regulatory guidance that can prevent future violations.
8. Monitor enforcement developments
Follow enforcement actions in your sector and jurisdiction. GDPR enforcement showed strong patterns: certain sectors (tech, telecoms, financial services) received disproportionate attention, and certain violation types (insufficient legal basis, inadequate transparency) were enforced first. Early AI Act enforcement patterns will similarly reveal where regulators are focusing.
Common mistakes that increase enforcement exposure
Mistake 1: Assuming enforcement will be delayed or lenient
GDPR enforcement started slowly but accelerated dramatically. By 2024, cumulative GDPR fines exceeded EUR 4 billion. The AI Act's enforcement infrastructure is being built on GDPR foundations — the learning curve for regulators will be shorter this time.
Mistake 2: Treating compliance as a one-time project
The AI Act requires continuous compliance: ongoing risk management, post-market monitoring, incident reporting, and documentation updates. A conformity assessment completed in July 2026 does not guarantee compliance in July 2027 if the system's performance has changed or new risks have emerged.
Mistake 3: Ignoring deployer obligations
Many organisations focus exclusively on provider obligations because they are more extensive. But deployer non-compliance — particularly failure to assign human oversight, inform affected persons, or complete FRIAs — carries the same Tier 2 penalty exposure.
Mistake 4: Failing to account for cumulative violations
The AI Act allows penalties for each violation. An organisation with multiple non-compliant AI systems faces potential penalties per system, not a single consolidated fine. A company with ten high-risk AI systems, none of which have conformity assessments, faces ten potential violations.
Mistake 5: Underestimating reputational damage
Regulatory fines are the quantifiable cost. The reputational cost — lost contracts, customer attrition, damaged brand, and increased scrutiny on future products — often exceeds the fine itself.
Frequently asked questions
Can authorities fine both the provider and the deployer for the same AI system?
Yes. Provider and deployer obligations are independent. If a provider fails to complete conformity assessment and a deployer fails to verify that conformity assessment was completed, both can be fined for their respective violations. The AI Act does not create joint liability — each actor is responsible for its own obligations.
Are AI Act fines per violation or per system?
The AI Act allows fines per infringement. In practice, national authorities will exercise discretion, and a single enforcement action may address multiple violations related to a single system. However, organisations operating multiple non-compliant systems face the risk of cumulative enforcement.
What mitigating factors reduce fines?
Article 99(6) requires authorities to consider: the nature, gravity, and duration of the infringement; whether corrective measures were taken; the degree of cooperation with authorities; previous infringements; the impact on affected persons; the degree of responsibility; and the manner in which the infringement became known. Good-faith compliance efforts with documented gaps will be treated more favourably than systemic non-compliance.
Can individuals sue companies for AI Act violations?
In part. Article 85 gives affected persons the right to lodge complaints with market surveillance authorities, and Article 86 provides a right to an explanation of individual decisions made with certain high-risk AI systems. The AI Act itself does not create a standalone damages claim; civil suits for harm caused by AI systems proceed under national liability law. In practice this creates both regulatory and court-based routes, broadly comparable to GDPR's dual-track approach.
Will GDPR and AI Act fines be imposed separately for the same conduct?
Potentially, yes. If an AI system violates both GDPR (e.g., unlawful profiling) and the AI Act (e.g., non-compliant high-risk system), both sets of penalties can apply. However, the AI Act includes a coordination provision to avoid disproportionate cumulation. Where the same conduct triggers both GDPR and AI Act penalties, authorities should coordinate to ensure proportionality — though this mechanism has not yet been tested in practice. See our detailed comparison of AI Act and GDPR obligations.
Is there an amnesty or grace period for organisations that voluntarily disclose non-compliance?
The AI Act does not include a formal amnesty or leniency programme. However, cooperation with authorities and voluntary corrective measures are explicitly listed as mitigating factors in fine determination. Organisations that discover non-compliance and proactively address it before enforcement action will generally face lighter consequences than those that are discovered through complaints or investigations.
Start with classification, build from there
The penalty structure is designed to be deterrent — and the numbers are large enough to command attention at any company size. But the AI Act also offers proportionality for smaller companies, clear pathways to compliance, and mitigating factors that reward good-faith efforts.
The organisations that will face the harshest enforcement are those that ignored the requirements entirely, not those that made genuine compliance efforts with some remaining gaps.
Start with classification. Build your documentation. Verify your supply chain. Use the complete AI Act guide to understand each obligation in detail. And run the free AI Act assessment to get a clear picture of where your organisation stands today.
Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.


