Article 52: Procedure for Systemic Risk Classification of GPAI Models
Article 52 sets the procedural rules for how a general-purpose AI model becomes classified as having systemic risk under Article 51. It covers notification by the GPAI provider to the AI Office, the AI Office's own-initiative designation power, provider rebuttal rights, and publication of the systemic-risk model list. The procedure ensures that classification is transparent, that providers can present arguments, and that the market knows which models carry systemic-risk duties.
Who does this apply to?
- Providers of GPAI models that meet or may meet the systemic-risk criteria in Article 51 and Annex XIII
- The AI Office (as the body conducting classification and maintaining the public list)
- Compliance and legal teams monitoring systemic-risk designation for their models or for upstream models they integrate
Scenarios
A foundation model provider trains a new model exceeding the cumulative training-compute presumption threshold in Article 51(2); they must notify the AI Office without delay.
The AI Office identifies a model with high-impact capabilities from market evidence and opens a classification proceeding on its own initiative.
A provider presents substantiated arguments that their model does not meet systemic-risk criteria despite crossing the compute threshold.
What Article 52 does (in plain terms)
Article 52 is the process engine behind systemic-risk classification. While Article 51 sets the criteria (when a model is systemic risk), Article 52 answers how the classification actually happens:
1. Provider notification — GPAI providers whose models meet the Article 51(1), point (a) condition (e.g. the cumulative training-compute presumption in Article 51(2)) must notify the AI Office without delay, and in any event within two weeks.
2. AI Office own-initiative power — The AI Office may designate a model as having systemic risk on its own initiative, on the basis of the Annex XIII criteria, using information from market monitoring, evaluations, or qualified alerts.
3. Provider rebuttal — Providers may present sufficiently substantiated arguments to demonstrate that, despite the indicators, their model does not present systemic risks.
4. Publication — The AI Office maintains and publishes a list of GPAI models with systemic risk.
5. Reassessment — On a reasoned request containing objective, detailed and new reasons, a designation can be reassessed, at the earliest six months after the designation decision.
The procedure ensures due process: providers are not silently classified—they receive notice and can respond before or after designation.
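The two-week notification window in step 1 runs from the day the Article 51(1)(a) condition is met, or from when it becomes known that it will be met, and is easy to mis-track across teams. A minimal deadline helper can make the trigger explicit; this is an illustrative sketch, and the function and variable names are ours, not the Act's:

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(weeks=2)  # Article 52(1): "within two weeks"

def notification_deadline(trigger_date: date) -> date:
    """Latest date to notify the AI Office, counted from the day the
    Article 51(1)(a) condition is met or is known it will be met."""
    return trigger_date + NOTIFICATION_WINDOW

def is_overdue(trigger_date: date, today: date) -> bool:
    """True once the notification window has lapsed."""
    return today > notification_deadline(trigger_date)

# Example: condition met on 1 March 2025 -> deadline 15 March 2025.
print(notification_deadline(date(2025, 3, 1)))  # 2025-03-15
```

Note that Article 52(1) requires notification "without delay"; the two-week deadline is the outer bound, not a grace period to plan around.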
How Article 52 connects to the rest of the Act
- Article 51 — Criteria for systemic-risk classification (Article 52 is the procedure that operationalises it).
- Article 53 — Baseline GPAI provider obligations (apply regardless of systemic-risk designation).
- Article 55 — Additional obligations for GPAI models with systemic risk (triggered once Article 52 designation is confirmed).
- Article 56 — Codes of practice may inform how systemic-risk evaluation and mitigation are demonstrated.
- Annex XIII — Criteria for the Commission's designation decisions under Article 51(1), point (b) (which Article 52 procedures operationalise).
- Annex XI — Section 2 documentation (evaluation, red-teaming, architecture) becomes mandatory after systemic-risk designation.
- Article 64 — AI Office powers referenced in classification proceedings.
- Article 113 — Application dates (Chapter V applies from 2 August 2025).
Official wording: Article 52 (English)
1. Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.
2. The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore should not be classified as a general-purpose AI model with systemic risk.
3. Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be considered to be a general-purpose AI model with systemic risk.
4. The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following a qualified alert from the scientific panel pursuant to Article 90(1), point (a), on the basis of criteria set out in Annex XIII.
The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex XIII by specifying and updating the criteria set out in that Annex.
5. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision.
6. The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex contextualise the systemic-risk classification procedure, due process for providers, and the AI Office's role in GPAI oversight. Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Monitor whether your GPAI model meets or approaches the systemic-risk criteria: the Article 51(2) cumulative-compute presumption and the Annex XIII indicators (capabilities, reach, users).
- Set up an internal trigger to notify the AI Office without delay if thresholds are met.
- Prepare documentation to substantiate arguments if you believe designation is not warranted despite threshold crossing.
- Track the AI Office's published list of systemic-risk GPAI models—relevant for downstream providers integrating those models.
- If designated: immediately operationalise Article 55 obligations and Annex XI Section 2 documentation.
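The first checklist item can be backed by a rough internal trigger: estimate cumulative training compute and flag when it approaches the Article 51(2) presumption threshold of 10^25 FLOPs. The 6·N·D approximation (about 6 FLOPs per parameter per training token) is a common community heuristic, not a method prescribed by the Act, and the alert margin below is an illustrative internal policy choice:

```python
THRESHOLD_FLOPS = 1e25   # Article 51(2) presumption threshold
ALERT_MARGIN = 0.8       # illustrative internal early-warning margin

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative-compute estimate via the 6*N*D heuristic."""
    return 6.0 * n_params * n_tokens

def classification_signal(n_params: float, n_tokens: float) -> str:
    """Map an estimate to an internal compliance signal."""
    flops = estimated_training_flops(n_params, n_tokens)
    if flops >= THRESHOLD_FLOPS:
        return "notify"        # Article 52(1): notify without delay
    if flops >= ALERT_MARGIN * THRESHOLD_FLOPS:
        return "approaching"   # prepare notification / rebuttal dossier
    return "below"

# 120e9 params on 12e12 tokens -> ~8.6e24 FLOPs -> "approaching"
print(classification_signal(120e9, 12e12))
```

Any real filing would rest on the provider's actual compute accounting, not this heuristic; the point is to wire the threshold into monitoring rather than discover it after the fact.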
Related Articles
Article 51: Classification of GPAI Models with Systemic Risk
Article 53: Obligations for Providers of General-Purpose AI Models
Article 54: Authorised Representatives of Providers of GPAI Models
Article 55: Obligations for Providers of GPAI Models with Systemic Risk
Article 56: Codes of Practice for GPAI Models
Article 64: AI Office
Annex XI: Technical Documentation for Providers of General-Purpose AI Models
Annex XIII: Criteria for Classification of GPAI Models with Systemic Risk
Article 113: Entry into Force and Application Dates
Related annexes
- Annex XI — GPAI technical documentation (Section 2 triggered by designation)
- Annex XIII — Criteria for systemic-risk classification
Frequently asked questions
Who decides if a model has systemic risk?
Either the provider self-notifies upon meeting the Article 51(1), point (a) condition (notably the cumulative training-compute presumption in Article 51(2)), or the AI Office designates on its own initiative on the basis of the Annex XIII criteria. The provider can present arguments in both cases.
Can designation be reversed?
Yes. The provider may submit a reasoned request containing objective, detailed and new reasons that have arisen since the designation, at the earliest six months after the designation decision; on reassessment, the classification can be withdrawn if the Annex XIII criteria are no longer met.
Is the list of systemic-risk models public?
Yes. The AI Office publishes and maintains the list so downstream providers and deployers can identify which upstream models carry systemic-risk status.
What happens immediately after designation?
The provider must comply with Article 55 additional obligations (model evaluation, adversarial testing, incident reporting, cybersecurity) and produce Annex XI Section 2 documentation, in addition to the Article 53 baseline duties.