Article 6: Classification Rules for High-Risk Systems
Article 6 is the classification gate for high-risk AI. It sets out three main routes: (1) safety components of products (or AI systems that are themselves products) covered by Annex I Union harmonisation legislation and subject to third-party conformity assessment; (2) systems listed in Annex III; and (3) the Article 6(3) carve-out, under which an Annex III system is not high-risk where it does not pose a significant risk of harm, including by not materially influencing the outcome of decision-making, and one of the narrow conditions (a)–(d) applies (profiling always remains high-risk). Article 6(4) requires the provider to document the assessment and to register under Article 49(2) when treating an Annex III system as non-high-risk. Confirm application dates under Article 113 and the transitional provisions in Article 111 on EUR-Lex.
Who does this apply to?
- Providers placing AI systems on the EU market or putting them into service in the Union
- Deployers that modify systems or put high-risk systems into service in the Union
- Product manufacturers where the AI system is a safety component of their product, or is itself the product, and that product requires third-party conformity assessment under the Annex I sector legislation
Scenarios
- A vendor sells biometric identification software to banks for customer onboarding at branches.
- A narrow AI spellchecker inside a word processor with no Annex III purpose.
- A CV-screening tool used only to rank internal applicants for promotions in one company, with no substantial harm and limited scope.
What Article 6 does (in plain terms)
Route A — Article 6(1) + Annex I: Where an AI system is a safety component of a product (or is the product) covered by Annex I Union harmonisation legislation and that product must undergo third-party conformity assessment before being placed on the market or put into service, the AI system is high-risk when both limbs (a) and (b) of paragraph 1 are met.
Route B — Article 6(2) + Annex III: AI systems referred to in Annex III are high-risk in addition to paragraph 1 systems—map your intended purpose to the Annex III categories in the Annex III guide.
Route C — Article 6(3) carve-out: An Annex III system is not high-risk where it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, and one of the second-subparagraph conditions (a)–(d) applies. Profiling always keeps the system high-risk (third subparagraph). Providers self-assessing out of high-risk must document the assessment and comply with the Article 49(2) registration obligation (Article 6(4)).
Delegated adjustments: Articles 6(6)–(7) and Article 7 empower the Commission to refine the carve-out conditions and the Annex III list via delegated acts, subject to the Article 6(8) safeguard; Article 6(5) also requires Commission guidelines with practical examples by 2 February 2026. Track EUR-Lex for updates.
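The three routes above can be sketched as a simple decision function. This is a minimal, illustrative model only: the class and field names are our own shorthand, and the real legal tests (intended purpose, Annex I product law hooks, the documented Article 6(3) assessment) are far more nuanced than booleans.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative inputs to the Article 6 routes; names are our own."""
    is_annex_i_safety_component: bool   # Route A limb (a): Annex I safety component / product
    needs_third_party_assessment: bool  # Route A limb (b): third-party conformity assessment
    matches_annex_iii: bool             # Route B: listed in Annex III
    performs_profiling: bool            # Article 6(3), third subparagraph
    carve_out_condition_met: bool       # one of Article 6(3)(a)-(d) applies
    no_significant_risk: bool           # documented absence of significant risk of harm

def is_high_risk(s: AISystem) -> bool:
    # Route A - Article 6(1): both limbs (a) and (b) must be fulfilled.
    if s.is_annex_i_safety_component and s.needs_third_party_assessment:
        return True
    # Route B - Article 6(2): Annex III listing, subject to the Route C carve-out.
    if s.matches_annex_iii:
        # Profiling always keeps an Annex III system high-risk.
        if s.performs_profiling:
            return True
        # Route C - Article 6(3): carve-out only if both tests are satisfied
        # (and the assessment is documented per Article 6(4)).
        if s.carve_out_condition_met and s.no_significant_risk:
            return False
        return True
    return False
```

Note how the profiling check sits before the carve-out: under the third subparagraph of Article 6(3), profiling of natural persons defeats the exception regardless of the (a)–(d) conditions.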
How Article 6 connects to the rest of the Act
- Article 1 — High-risk classification is part of the harmonised rule areas in Article 1(2)(c).
- Article 2 — Scope determines whether Article 6 tests apply to your actors and deployments.
- Article 3 — Definitions such as provider, deployer, intended purpose, and safety component anchor classification memos.
- Article 4 — AI literacy supports teams interpreting Article 6 and Annex III.
- Article 5 — Prohibited practices are a separate gate: if caught by Article 5, Article 6 classification is not a workaround.
- Article 7 — Commission delegated acts to amend Annex III use cases where risk is equivalent or greater.
- Annex III — Operational list for Article 6(2).
- Annex I — Product law hooks for Article 6(1).
- Article 49 — Registration when relying on Article 6(3) as non-high-risk (Article 6(4)).
- Articles 8–15 — Chapter III requirements once a system is high-risk.
- Article 97 — Empowerment for delegated acts referenced in Article 6(6)–(7).
- Article 113 — Application dates, including the staged timing relevant to Article 6(1); verify the calendar on EUR-Lex for your pathway.
Article 6(3) exceptions (narrow)
Caution: Article 6(3) provides that an Annex III AI system is not high-risk where it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, and one of the narrow conditions (a)–(d) in the authentic text applies.
These carve-outs do not apply where the system performs profiling of natural persons (Article 6(3), third subparagraph). Processing of special-category personal data and profiling under Union law can also affect reliance on the carve-out; see the complete text in the official wording section below. Always document the legal position before market release.
What changes when you are high-risk?
High-risk classification triggers the Chapter III requirements (Articles 8–15), provider and deployer duties (Articles 16–27), conformity assessment, CE marking, EU database registration under Article 49, and post-market monitoring (e.g. Article 72), among others. Use the assessment tool to build a task list for your chain.
Official wording: Article 6 — Classification rules for high-risk AI systems
The following reproduces Article 6 in full from the English consolidated text of Regulation (EU) 2024/1689.
1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.
3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.
The first subparagraph shall apply where any of the following conditions is fulfilled:
(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.
4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than 2 February 2026, provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III, but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.

7. The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by this Regulation.

8. Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance with paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and fundamental rights provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and take account of market and technological developments.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex contextualise risk classification, Annex III policy choices, and proportionality behind Article 6(3). Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Confirm the intended purpose and deployment context of the AI system in plain language.
- Check Annex III line by line; record which category (if any) matches your use case.
- If you rely on Article 6(3), document the legal basis and why profiling or special-category data exclusions do not block the exception.
- If the system is embedded in a regulated product, verify Annex I alignment and third-party conformity assessment triggers.
- Update classification when the purpose, user population, or outputs materially change.
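Teams that rely on the Article 6(3) carve-out often keep the checklist above as a structured record, since Article 6(4) requires the assessment to be documented before placing on the market and produced to competent authorities on request. A minimal sketch follows; every field name here is our own suggestion, not a term from the Regulation, and the example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ClassificationRecord:
    """Illustrative Article 6(3)/6(4) assessment record; field names are ours."""
    system_name: str
    intended_purpose: str                 # plain-language intended purpose
    annex_iii_category: Optional[str]     # matched Annex III category, if any
    carve_out_condition: Optional[str]    # which of Article 6(3)(a)-(d) is relied on
    profiling_check: str                  # why profiling does not apply
    assessed_on: date                     # must predate placing on the market
    registered_under_art_49_2: bool       # registration obligation per Article 6(4)

# Hypothetical example based on the CV-screening scenario above.
record = ClassificationRecord(
    system_name="Internal CV-screening tool",
    intended_purpose="Rank internal applicants for promotion in one company",
    annex_iii_category="Employment and workers management (Annex III)",
    carve_out_condition="(d) preparatory task to a relevant assessment",
    profiling_check="No profiling of natural persons; human review decides",
    assessed_on=date(2025, 1, 15),
    registered_under_art_49_2=True,
)
```

Keeping the record versioned also supports the last checklist item: when the purpose, user population, or outputs materially change, a fresh record should be created rather than the old one edited.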
Related Articles
Article 1: Subject matter
Article 2: Scope
Article 3: Definitions
Article 4: AI literacy
Article 5: Prohibited AI Practices
Article 7: Amendments to Annex III
Annex III: High-Risk AI System Areas
Annex I: Union Harmonisation Legislation — Product Law List
Article 9: Risk Management System
Article 49: EU Database Registration
Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems
Article 72: Post-Market Monitoring
Article 97: Exercise of the Delegation
Article 113: Entry into Force and Application Dates
Frequently asked questions
Is ChatGPT high-risk?
The underlying general-purpose model is regulated under Chapter V (GPAI); neither it nor every downstream chatbot is automatically high-risk. A high-risk determination depends on how the system is placed on the market or used, for example where a customised deployment falls within an Annex III use case.
Can we self-declare as non-high-risk?
You must apply the Article 6 tests in good faith and document the outcome (Article 6(4)). Market surveillance authorities can challenge a classification, including where a provider circumvents requirements through mislabelling.
Does open-source affect Article 6?
Open-source status can matter for GPAI under Article 53, but it does not automatically remove Annex III high-risk status for a finished system placed on the market for a high-risk purpose.