EU AI Act Recitals Overview
The roughly 180 recitals of the AI Act explain the legislature's intent and aid interpretation of the operative articles. They are grouped here by theme for quick reference.
Only the text published on EUR-Lex is legally binding.
Purpose, scope and definitions: These recitals explain why the AI Act exists, its legal basis (Art. 114 TFEU), the objective of promoting trustworthy AI while protecting fundamental rights, and how the Regulation fits alongside existing Union law (the GDPR, product-safety legislation, non-discrimination law). They also clarify the definitions of AI system, provider and deployer, and the territorial scope, which reaches third-country providers whose AI output is used in the Union.
Prohibited practices: These recitals provide the rationale for each prohibition in Article 5: why subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing based solely on profiling, untargeted scraping of facial images, emotion recognition in the workplace and in education, biometric categorisation inferring sensitive characteristics, and real-time remote biometric identification in publicly accessible spaces are considered incompatible with Union values. They also explain the narrow law-enforcement exceptions and the safeguards attached to them.
High-risk classification: These recitals explain the two-pathway classification system: product-based (AI that is a safety component of, or itself a product covered by, the Union harmonisation legislation listed in Annex I) and stand-alone (the use cases listed in Annex III). They justify why specific areas (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice) are considered high-risk. Recital 53 introduces the Article 6(3) derogation allowing providers to document that their Annex III system is not high-risk because it does not pose a significant risk of harm. The recitals also address the Commission's power to update Annex III.
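The two-pathway test described above can be sketched as simple decision logic. This is an illustrative simplification, not legal advice: the predicates (`is_annex_i_product`, `is_annex_iii_use_case`, `poses_significant_risk`) are hypothetical placeholders for assessments that in practice require legal analysis, and refinements such as the profiling carve-out are omitted.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    is_annex_i_product: bool       # safety component of / product under Annex I legislation
    is_annex_iii_use_case: bool    # matches a stand-alone use case listed in Annex III
    poses_significant_risk: bool   # outcome of the provider's Article 6(3) assessment

def is_high_risk(system: AISystem) -> bool:
    """Rough two-pathway decision logic: Annex I pathway, or Annex III
    pathway unless the Article 6(3) derogation applies."""
    if system.is_annex_i_product:
        return True
    if system.is_annex_iii_use_case:
        # Article 6(3): an Annex III system escapes the high-risk label if
        # the provider documents that it poses no significant risk of harm.
        return system.poses_significant_risk
    return False
```

A provider relying on the Article 6(3) branch must document that assessment and remains subject to reclassification if the Commission updates Annex III.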
Requirements for high-risk AI systems: These recitals flesh out the substantive requirements of Articles 8-15: risk-management methodology, data governance (datasets that are relevant, sufficiently representative and, to the best extent possible, free of errors), technical documentation standards, automatic logging for traceability, transparency and instructions for use, human oversight mechanisms (including stop and override functions), and accuracy, robustness and cybersecurity benchmarks. They emphasise that the requirements must be proportionate to the intended purpose and the state of the art.
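To make the automatic-logging requirement concrete, the sketch below shows the kind of record a provider might capture for traceability. The field names are assumptions chosen for illustration; the Act mandates logging of relevant events over the system's lifetime but does not prescribe a schema.

```python
import datetime

def make_log_record(system_id: str, event: str, input_ref: str, operator: str) -> dict:
    """Purely illustrative traceability record for a high-risk AI system.
    Field names are hypothetical, not taken from the Act or any standard."""
    return {
        # Timezone-aware timestamp so events can be ordered across systems.
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,    # which AI system produced the event
        "event": event,            # e.g. a situation that may present a risk
        "input_ref": input_ref,    # reference to the input data, for traceability
        "operator": operator,      # person exercising human oversight
    }
```

Records like this would then be retained for the period required under the log-retention duty and made available to market surveillance authorities on request.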
Obligations along the value chain: These recitals explain the allocation of duties among providers, deployers, importers, distributors and authorised representatives. They clarify when a deployer becomes a provider (e.g. by substantially modifying a system or placing it on the market under its own name). They also cover the quality management system (Article 17), documentation keeping (Article 18), retention of automatically generated logs (Article 19), corrective actions (Article 20), cooperation with authorities (Article 21), and deployers' specific duties, including the fundamental rights impact assessment (FRIA) and monitoring of use.
Standards and conformity assessment: These recitals explain harmonised standards (Article 40), common specifications (Article 41), the presumption-of-conformity mechanism, the conformity assessment procedures (internal control versus third-party assessment under Article 43), the role of notified bodies and their independence requirements, the EU declaration of conformity (Article 47), CE marking (Article 48), registration in the EU database (Article 49), and integration with existing product-law conformity processes (Article 8(2)).
Transparency and general-purpose AI: These recitals cover the transparency requirements for certain AI systems (Article 50: chatbots, emotion recognition, biometric categorisation, deepfakes) and the GPAI framework (Articles 51-56). They explain the tiered approach: all GPAI models must meet transparency and documentation obligations, while models posing systemic risk face additional evaluation, adversarial-testing, incident-reporting and cybersecurity duties. The codes-of-practice mechanism (Article 56) and the AI Office's role in GPAI enforcement are also covered.
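The tiered GPAI approach can be summarised as a base set of duties plus extras for systemic-risk models. The short labels below are informal shorthand for the statutory obligations, not official terminology; a sketch under that assumption:

```python
def gpai_obligations(has_systemic_risk: bool) -> list[str]:
    """Informal shorthand for the tiered GPAI duties: a baseline for all
    models, plus additional duties for models posing systemic risk."""
    duties = [
        "technical documentation",
        "information to downstream providers",
        "copyright-compliance policy",
        "training-content summary",
    ]
    if has_systemic_risk:
        duties += [
            "model evaluation",
            "adversarial testing",
            "serious-incident reporting",
            "cybersecurity protection",
        ]
    return duties
```

The design point is that systemic-risk duties are strictly additive: a systemic-risk model carries every baseline duty plus the extra tier, rather than a separate regime.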
Innovation support: These recitals explain the regulatory sandbox framework (Articles 57-61), the rationale for supporting innovation while maintaining safeguards, the rules on personal data processing within sandboxes (Article 59), real-world testing requirements (Article 60), informed consent (Article 61), and the SME and start-up support measures (Articles 62-63), including reduced fees and priority sandbox access.
Governance: These recitals explain the multi-layered governance architecture: the AI Office (Article 64) within the Commission as the GPAI enforcement arm, the European Artificial Intelligence Board (Article 65) as an advisory and coordination body, the advisory forum (Article 67) for stakeholder input, the scientific panel of independent experts (Article 68) for technical advice on GPAI, and the designation of national competent authorities (Article 70). They emphasise the need for adequate resources and technical expertise at national level.
Enforcement, remedies and penalties: These recitals cover post-market monitoring (Article 72), serious incident reporting (Article 73), market surveillance powers (Article 74), mutual assistance (Article 75), the national procedure for AI systems presenting a risk (Article 79), the Union safeguard procedure (Article 81), formal non-compliance (Article 83), the right to lodge a complaint (Article 85), the right to an explanation of individual decision-making (Article 86), whistleblower protection (Article 87), and the penalty framework (Articles 99-101), including the fine tiers of EUR 35 million or 7% of worldwide annual turnover for prohibited practices, EUR 15 million or 3% for other violations, and EUR 7.5 million or 1% for supplying incorrect information.
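The fine tiers are expressed as a fixed amount or a percentage of total worldwide annual turnover, whichever is higher for undertakings (for SMEs the lower of the two applies, which is not modelled here). A minimal arithmetic sketch of the upper cap:

```python
def max_fine_eur(tier_fixed_eur: float, tier_pct: float, worldwide_turnover_eur: float) -> float:
    """Upper fine cap for an undertaking: the higher of the fixed tier
    amount or the tier percentage of worldwide annual turnover."""
    return max(tier_fixed_eur, tier_pct * worldwide_turnover_eur)

# Prohibited-practices tier (EUR 35M / 7%) for a firm with EUR 1bn turnover:
# 7% of 1bn = EUR 70M, which exceeds EUR 35M, so the cap is EUR 70M.
prohibited_cap = max_fine_eur(35e6, 0.07, 1e9)
```

For a small firm with, say, EUR 100 million turnover, 7% is EUR 7 million, so the fixed EUR 35 million amount sets the cap instead.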
Final provisions: These recitals explain the phased application dates (Article 113), the transitional arrangements of Article 111, covering both AI systems already placed on the market and the grandfathering of the Annex X large-scale IT systems, the Commission's evaluation and review duty (Article 112), the delegated-acts mechanism (Article 97), and the relationship with other Union acts, including the GDPR, the Digital Services Act and sector-specific legislation.