Chapter XIII — Final Provisions

Article 112: Evaluation and Review

Applies from 2 August 2026. Verified against EUR-Lex, April 2026.

Article 112 requires the European Commission to periodically evaluate and review the AI Act — at least every four years, with the first evaluation by 2 August 2029. The Commission must assess: the need to amend the Annex III high-risk list, the effectiveness of the transparency obligations (Article 50), the functioning of the governance and enforcement framework, and whether national authorities have sufficient resources. The Commission submits evaluation reports to the European Parliament and Council, potentially accompanied by legislative proposals for amendment. This article is the mechanism through which the AI Act evolves over time. Always verify on EUR-Lex.

Who does this apply to?

  • The European Commission, responsible for conducting periodic evaluations and submitting reports
  • The European Parliament and the Council, which receive evaluation reports and act on any proposals for amendment
  • Stakeholders (industry, civil society, national authorities), who provide input on the effectiveness of Annex III classifications, transparency rules, and enforcement

Scenarios

By 2029, AI-powered mental health chatbots have become widely deployed across the EU, providing therapy-like interactions to millions of users. Several incidents of harm are reported (inappropriate advice, failure to detect suicidal ideation). Civil society organisations petition the Commission to add AI-based mental health tools to the Annex III high-risk list.

Under Article 112, the Commission's first evaluation (due by 2 August 2029) must assess whether the Annex III high-risk list needs updating. The mental health chatbot scenario would be a concrete example: the Commission evaluates the evidence of harm, the scale of deployment, the severity of risks to fundamental rights and health, and whether existing transparency obligations (Article 50) are sufficient. If the Commission concludes that amendment is needed, it submits a proposal to the Parliament and Council to add this use case to Annex III.
Ref. Art. 112, Art. 7, Annex III

Three years after full application, several Member States report that their national AI market surveillance authorities lack sufficient technical expertise and funding to effectively enforce the AI Act, leading to uneven enforcement across the single market.

Article 112 requires the Commission to evaluate the effectiveness of the enforcement framework and whether national authorities have adequate resources. The Commission's evaluation report would document the resource gaps, enforcement disparities, and their impact on the internal market. The report could recommend increased EU funding for national authorities, propose the creation of shared EU-level technical assessment capacity, or suggest amendments to strengthen the enforcement provisions.
Ref. Art. 112, Art. 64

What Article 112 does (in plain terms)

Article 112 is the AI Act's built-in review mechanism. Recognising that AI technology evolves rapidly and that the Regulation's risk classifications and rules may need adjustment over time, the legislator requires the Commission to systematically evaluate the Act's operation and propose amendments if necessary.

Key evaluation areas include:

1. Annex III high-risk list — Does the list of high-risk AI use cases (Article 7 / Annex III) still reflect the actual risk landscape? Should use cases be added, removed, or refined as technology and deployment patterns evolve?

2. Transparency obligations (Article 50) — Are the transparency requirements for AI systems that interact with people, generate synthetic content, or are used for emotion recognition or biometric categorisation effective? Do they need strengthening or clarification?

3. Governance structure — Is the institutional framework (AI Office, AI Board, national authorities) functioning effectively? Are coordination mechanisms between EU and national levels working?

4. Enforcement effectiveness — Are national market surveillance authorities able to enforce the AI Act in practice? Do they have sufficient resources, expertise, and legal tools?

5. Resources for national authorities — The Commission must specifically assess whether national authorities need additional resources (financial, technical, human) to fulfil their enforcement obligations.

The first evaluation must be completed and submitted to the European Parliament and Council by 2 August 2029. Subsequent evaluations must occur at least every four years thereafter.

This is not merely a reporting exercise — the Commission may accompany evaluation reports with legislative proposals to amend the Regulation, making Article 112 the pathway through which the AI Act adapts to new developments.

How Article 112 connects to the rest of the Act

  • Article 7 (Amendments to Annex III): Article 7 gives the Commission the power to amend the Annex III high-risk list through delegated acts. Article 112 establishes the evaluation framework that informs such amendments.
  • Annex III (High-risk AI systems): the list of high-risk use cases is a primary subject of the Article 112 evaluation. The Commission assesses whether it needs updating.
  • Article 50 (Transparency obligations): the effectiveness of transparency rules for certain AI systems (chatbots, deepfakes, emotion recognition, biometric categorisation) is a specific evaluation topic.
  • Article 64 (Market surveillance): the Commission evaluates the functioning of the entire market surveillance and enforcement framework.
  • Article 113 (Entry into force and application dates): the 2 August 2029 deadline for the first evaluation is anchored in the Act's timeline.
  • Article 6 (Classification rules for high-risk AI systems): together with Article 7, the evaluation may lead to a recalibration of the high-risk classification system.

Why Article 112 matters for stakeholders

Article 112 is important even though it does not impose direct compliance obligations on providers or deployers. It matters because:

For industry:
  • The evaluation cycle creates regulatory predictability: stakeholders know that the high-risk list and transparency rules will be systematically reviewed on a defined schedule.
  • It opens structured opportunities for input: industry associations, companies, and technical experts can provide evidence and views during the evaluation process.
  • Companies should anticipate changes: if your AI use case is not currently on the Annex III high-risk list but is generating public concern, Article 112 is the mechanism through which it could be added. Plan accordingly.

For civil society:
  • Article 112 is the primary channel for advocating that new high-risk categories be added or existing rules be strengthened, based on real-world evidence of harm.
  • The evaluation must cover enforcement effectiveness; civil society can highlight enforcement gaps and under-resourcing.

For national authorities:
  • The evaluation addresses whether national authorities have sufficient resources; this is the opportunity to document resource needs and gaps.
  • The Commission's enforcement effectiveness assessment can lead to structural improvements in the market surveillance framework.

Timeline and process

First evaluation cycle:
  • 2 August 2029: deadline for the Commission to submit its first evaluation report to the European Parliament and the Council.
  • The evaluation will cover the first three years of full application of most AI Act provisions (August 2026 to August 2029).
  • Expect the Commission to begin preparatory work (consultations, studies, data collection) well before the deadline, likely starting in 2027–2028.

Subsequent cycles:
  • At least every four years thereafter, i.e. by approximately August 2033, August 2037, and so on (see the illustrative sketch below).
  • Each cycle assesses the same core topics (Annex III, Article 50, governance, enforcement) plus any emerging issues.
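Since the reporting cadence is simple date arithmetic, the following minimal Python sketch prints the latest-possible report dates. It is purely illustrative: the constant names and the helper function are our own, not anything defined in the Act, and because the Act says "at least every four years", actual reports may come sooner than these dates.

    # Illustrative sketch only: latest-possible Article 112 report dates,
    # assuming a first report by 2 August 2029 and a strict four-year cadence.
    from datetime import date

    FIRST_REPORT = date(2029, 8, 2)  # first evaluation report deadline
    CYCLE_YEARS = 4                  # maximum interval between reports

    def report_deadlines(n: int) -> list[date]:
        """Return the first n latest-possible report dates."""
        return [FIRST_REPORT.replace(year=FIRST_REPORT.year + CYCLE_YEARS * i)
                for i in range(n)]

    for d in report_deadlines(3):
        print(d.isoformat())  # 2029-08-02, 2033-08-02, 2037-08-02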

Process:
  • The Commission will likely issue a public consultation, commission external studies, and consult the AI Board and national authorities.
  • The evaluation report is submitted to the Parliament and the Council with findings and, where appropriate, legislative proposals for amendment.
  • If amendments are proposed, they follow the ordinary legislative procedure (unless delegated-act powers under Article 7 apply for Annex III changes).

Compliance checklist

  • Monitor the Commission's evaluation timeline: preparatory consultations may begin as early as 2027–2028 ahead of the August 2029 deadline.
  • Track whether your AI use case is under discussion for potential addition to the Annex III high-risk list — participate in consultations to provide evidence.
  • If you operate in a sector not currently classified as high-risk, assess the likelihood that Article 112 evaluations could change this classification and prepare contingency compliance plans.
  • For national authorities: document enforcement resource needs and gaps to feed into the Commission's evaluation of whether additional resources are needed.
  • For industry associations: prepare position papers on the effectiveness of current classifications and transparency rules to contribute to the evaluation process.
  • Review each evaluation report when published — it signals the regulatory direction for the next cycle and may include legislative proposals affecting your obligations.

Related annexes

  • Annex III — High-risk AI systems (subject of Article 112 evaluation for potential updates)

Frequently asked questions

Can Article 112 lead to the AI Act being weakened or requirements reduced?

In principle, the evaluation could lead to amendments in either direction — strengthening or relaxing requirements. The Commission could propose removing use cases from the high-risk list if evidence shows the risk was lower than anticipated, or adding new use cases if new risks emerge. However, given the political dynamics around AI regulation, most observers expect evaluations to result in refinements and additions rather than wholesale weakening.

How can stakeholders influence the Article 112 evaluation?

The Commission typically conducts public consultations as part of evaluation exercises. Industry, civil society, academia, and national authorities can submit evidence, position papers, and recommendations. Participating in these consultations — and providing data on real-world AI deployment outcomes — is the most direct way to influence the evaluation's conclusions and any resulting legislative proposals.

Is Article 112 the only mechanism for updating the high-risk list?

No. Article 7 gives the Commission the power to amend the Annex III high-risk list through delegated acts without waiting for the Article 112 evaluation cycle. Article 112 provides the systematic, periodic review framework, while Article 7 allows targeted updates in response to emerging evidence at any time. In practice, significant changes to the high-risk list are likely to be informed by Article 112 evaluations.