Article 101: Fines for Providers of General-Purpose AI Models
Article 101 empowers the European Commission, acting through the AI Office, to impose fines on providers of general-purpose AI (GPAI) models of up to EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher. Fining grounds include infringing the GPAI obligations of the Regulation, supplying incorrect, incomplete or misleading information, failing to comply with a measure requested under Article 93, and failing to give the Commission access to a model for evaluation. In fixing the amount, the Commission must have regard to the nature, gravity and duration of the infringement and to proportionality, and providers must be heard on the Commission's preliminary findings before a decision is taken. This article should not be confused with the neighbouring Article 100, under which the European Data Protection Supervisor fines Union institutions, bodies, offices and agencies.
Who does this apply to?
- Providers of general-purpose AI models (e.g., large language model developers, foundation model companies)
- Providers of GPAI models with systemic risk (subject to additional Article 55 obligations)
- The European Commission and its AI Office, which enforce GPAI model rules at EU level
- Compliance teams at AI foundation model companies and their legal counsel
Scenarios
A major AI company places a general-purpose AI model on the EU market without the technical documentation required under Article 53(1), a copyright compliance policy under Article 53(1)(c), or the public training-content summary under Article 53(1)(d).
During an AI Office investigation into a GPAI model flagged for systemic risk, the provider submits a model evaluation report that omits critical safety testing results.
What Article 101 does (in plain terms)
Article 101 creates a dedicated fine regime for GPAI model providers, enforced centrally by the Commission acting through the AI Office. This is distinct from the general penalty framework in Article 99, which covers AI systems and is enforced primarily by national authorities.
Fine structure:
| Ground under Article 101(1) | Maximum fine |
|---|---|
| (a) Infringement of the relevant GPAI provisions (in practice, the Article 53–55 obligations for GPAI models and GPAI models with systemic risk) | EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher |
| (b) Failure to comply with a request for documents or information under Article 91, or supplying incorrect, incomplete or misleading information | EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher |
| (c) Non-compliance with a measure requested under Article 93 | EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher |
| (d) Failure to give the Commission access to the model for an evaluation under Article 92 | EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher |

Note that all four grounds share a single ceiling; the lower EUR 7,500,000 / 1% tier for supplying incorrect information appears in Article 99, not here.
Key design choices:
- Enforcement is centralised: the Commission, acting through the AI Office, imposes the fines, and national market surveillance authorities do not enforce GPAI model rules.
- The 'whichever is higher' rule means that for large companies (roughly, those with worldwide annual turnover above EUR 500 million) the 3% percentage cap produces a larger number than the flat EUR 15 million amount.
- In fixing the amount, the Commission must have regard to the nature, gravity and duration of the infringement, to proportionality and appropriateness, and to commitments made under Article 93(3) or under relevant codes of practice (Article 56).
- For GPAI models with systemic risk, the additional obligations under Article 55 apply; breach of these triggers the same fine ceiling.
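The 'whichever is higher' rule is simple arithmetic. A minimal sketch (the helper name and turnover figures are illustrative, not from the Act):

```python
# Hypothetical helper (not from the Act): the Article 101(1) ceiling is
# the higher of a flat EUR 15M and 3% of worldwide annual turnover.
def gpai_fine_ceiling(worldwide_turnover_eur: float,
                      flat_cap_eur: float = 15_000_000,
                      pct_cap: float = 0.03) -> float:
    """Return the maximum fine: the higher of the flat cap and the
    percentage-of-turnover cap."""
    return max(float(flat_cap_eur), pct_cap * worldwide_turnover_eur)

# Below EUR 500M turnover the flat amount dominates; above it, the 3% cap does.
print(gpai_fine_ceiling(100_000_000))     # -> 15000000.0
print(gpai_fine_ceiling(500_000_000))     # -> 15000000.0 (breakeven point)
print(gpai_fine_ceiling(10_000_000_000))  # -> 300000000.0
```

The breakeven at EUR 500 million follows directly from 15,000,000 / 0.03.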
How Article 101 connects to the rest of the Act
- Article 51 — Classification of GPAI models: determines which models fall under the GPAI framework.
- Article 52 — Procedure for classification of GPAI models with systemic risk: how providers notify, and the Commission designates, systemic risk models.
- Article 53 — Obligations for GPAI model providers: technical documentation, copyright policy, training-content summary — breach of these triggers Article 101 fines.
- Article 54 — Authorised representatives for GPAI model providers outside the EU.
- Article 55 — Additional obligations for GPAI models with systemic risk: model evaluations, adversarial testing, cybersecurity, incident reporting.
- Article 56 — Codes of practice for GPAI: voluntary commitments that may serve as evidence of compliance.
- Article 88 — Enforcement of GPAI provider obligations: gives the Commission exclusive powers over Chapter V, implemented through the AI Office.
- Article 93 — Power to request measures: the corrective measures whose non-compliance triggers fines under Article 101(1)(c).
- Article 99 — General penalty framework: Article 101 is the GPAI-specific counterpart.
- Article 113 — Application dates: the Chapter V GPAI obligations apply from 2 August 2025, but Article 113(b) carves Article 101 out of that date, so the fining power itself applies from 2 August 2026.
Practical guidance for GPAI model providers
GPAI model providers — particularly those developing large language models, multimodal foundation models, or models with systemic risk classification — should treat Article 101 as a first-priority enforcement risk:
1. Prepare technical documentation early — Article 53(1) requires detailed technical documentation before placing the model on the EU market. Non-compliance is directly finable under Article 101.
2. Implement a copyright compliance policy — Article 53(1)(c) requires a policy to comply with Union copyright law, and Article 53(1)(d) requires a publicly available summary of training content; have both reviewed by IP counsel and published alongside the model.
3. For systemic risk models, invest in safety evaluation — Article 55 requires model evaluations, adversarial testing, and cybersecurity measures. These are technically demanding and should be budgeted as core development costs.
4. Respond fully to AI Office requests — Supplying incorrect, incomplete or misleading information is a free-standing fining ground under Article 101(1)(b), subject to the same EUR 15 million / 3% ceiling. Implement an internal process for centralised, quality-controlled responses.
5. Engage with codes of practice — Article 56 codes of practice are voluntary, but the Commission must take relevant commitments under them into account when fixing the amount of a fine.
6. Model financial exposure — For a company with EUR 10 billion in annual revenue, the 3% ceiling means up to EUR 300 million in potential fines. Brief the board on this exposure.
7. Appoint an EU-based authorised representative — Non-EU GPAI providers must appoint a representative under Article 54; failure to do so is itself an infringement.
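Point 6 generalises into a small sensitivity table for a board briefing. A sketch, with all revenue levels hypothetical:

```python
# Maximum Article 101 exposure (higher of EUR 15M and 3% of worldwide
# annual turnover) at several hypothetical revenue levels.
FLAT_CAP_EUR = 15_000_000
PCT_CAP = 0.03

scenarios_eur = [50_000_000, 500_000_000, 2_000_000_000, 10_000_000_000]

for turnover in scenarios_eur:
    ceiling = max(FLAT_CAP_EUR, PCT_CAP * turnover)
    print(f"turnover EUR {turnover:>14,} -> max exposure EUR {ceiling:>13,.0f}")
```

A table like this makes clear at which revenue level the percentage cap, rather than the flat amount, drives the exposure figure.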
Official wording: Article 101
Article 101
Fines for providers of general-purpose AI models
1. The Commission may impose on providers of general-purpose AI models fines not exceeding 3 % of their annual total worldwide turnover in the preceding financial year or EUR 15 000 000, whichever is higher, when the Commission finds that the provider intentionally or negligently:
(a) infringed the relevant provisions of this Regulation;
(b) failed to comply with a request for a document or for information pursuant to Article 91, or supplied incorrect, incomplete or misleading information;
(c) failed to comply with a measure requested under Article 93;
(d) failed to make available to the Commission access to the general-purpose AI model or general-purpose AI model with systemic risk with a view to conducting an evaluation pursuant to Article 92.
In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Article 93(3) or made in relevant codes of practice in accordance with Article 56.
2. Before adopting the decision pursuant to paragraph 1, the Commission shall communicate its preliminary findings to the provider of the general-purpose AI model and give it an opportunity to be heard.
3. Fines imposed in accordance with this Article shall be effective, proportionate and dissuasive.
4. Information on fines imposed under this Article shall also be communicated to the Board as appropriate.
5. The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions whereby the Commission has fixed a fine under this Article. It may cancel, reduce or increase the fine imposed.
6. The Commission shall adopt implementing acts containing detailed arrangements for proceedings and procedural safeguards with a view to the possible adoption of decisions pursuant to paragraph 1 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).
Recitals and legislative context
Recitals 166–168 explain the rationale for a centralised GPAI enforcement mechanism. Because general-purpose AI models are developed by a small number of companies and deployed across all Member States, the legislator determined that a single EU-level enforcement authority (the AI Office) is more appropriate than fragmented national enforcement. The recitals stress the importance of proportionality — fines should reflect the scale and impact of the provider — and note that the GPAI fine regime is complementary to the general penalty framework in Article 99.
Use the official preamble on EUR-Lex to read the recitals in full.
Compliance checklist
- Classify your AI model under Article 51 to determine whether GPAI obligations (and therefore Article 101 fines) apply.
- Prepare and maintain Article 53 technical documentation — this is the most common compliance gap and a direct fine trigger.
- Put in place a copyright compliance policy (Article 53(1)(c)), publish the training-content summary required by Article 53(1)(d), and keep both updated.
- For systemic risk GPAI models: implement model evaluations, adversarial testing, and cybersecurity measures per Article 55.
- Designate an internal function responsible for responding to AI Office information requests with quality-controlled, complete responses.
- Appoint an EU-based authorised representative under Article 54 if the GPAI provider is established outside the Union.
- Model financial exposure: calculate 3% of worldwide annual turnover and brief the board on maximum fine exposure under Article 101.
Related Articles
Article 51: Classification of GPAI Models with Systemic Risk
Article 52: Procedure for Systemic Risk Classification of GPAI Models
Article 53: Obligations for Providers of General-Purpose AI Models
Article 54: Authorised Representatives of Providers of GPAI Models
Article 55: Obligations for Providers of GPAI Models with Systemic Risk
Article 56: Codes of Practice for GPAI Models
Article 93: Power to Request Measures
Article 99: Penalties for AI Act Infringements
Article 113: Entry into Force and Application Dates
Frequently asked questions
Does Article 101 apply to open-source GPAI models?
Partially. Article 53(2) exempts free and open-source GPAI models whose parameters are made publicly available from some of the documentation obligations — but even open-source providers must still comply with the copyright policy and training-content summary requirements, and the exemption does not extend to models with systemic risk. Breach of applicable obligations remains finable under Article 101. Systemic risk obligations under Article 55 apply regardless of the model's licence.
Can national authorities also fine GPAI model providers?
No, not for GPAI model-specific obligations. The Commission, acting through the AI Office, has exclusive powers to supervise and enforce the Chapter V GPAI obligations. However, if the same entity also deploys high-risk AI systems at the application level, national authorities can enforce those obligations under Article 99.
How does Article 101 interact with Article 99 fines?
They are complementary, not overlapping. Article 101 covers GPAI model obligations (provider-level, centrally enforced by the Commission). Article 99 covers AI system obligations (enforced at national level). A company that both develops a GPAI model and deploys it in a high-risk application could face fines under both articles for different infringements.