Is My AI System High-Risk? A Practical Classification Guide Under the EU AI Act

TL;DR

  • The EU AI Act classifies AI systems as high-risk through two pathways: safety components of regulated products (Annex I) and standalone use cases in eight sensitive domains (Annex III).
  • Annex III covers eight domains: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.
  • The Article 6(3) exception can exempt Annex III systems that pose no significant risk because they only perform narrow procedural or preparatory tasks, improve completed human work, or detect patterns without influencing decisions, but not if the system profiles individuals.
  • High-risk classification triggers the full compliance stack: risk management, technical documentation, conformity assessment, human oversight, logging, post-market monitoring, and EU database registration.
  • The compliance deadline for Annex III systems is 2 August 2026; Annex I product systems have until 2 August 2027.
  • The European Commission's detailed classification guidance remains pending as of early 2026, creating uncertainty that organisations must navigate with conservative interpretations.
  • Getting classification wrong in either direction is costly: under-classification exposes you to fines up to EUR 15 million or 3% of global turnover; over-classification wastes compliance resources.

This is the single most important question for any AI provider or deployer under the EU AI Act: does your system qualify as high-risk?

The answer determines whether you face the full compliance burden — technical documentation, risk management, conformity assessment, human oversight, logging, and post-market monitoring — or a much lighter set of obligations. For most organisations, the difference is hundreds of thousands of euros in compliance cost and months of preparation time.

This guide walks through the classification logic as set out in Article 6 and Annex III, including the exceptions most teams overlook, real-world examples for each domain, and a practical decision process you can follow today.

The two pathways to high-risk classification

The AI Act defines high-risk through two distinct pathways. Your system is high-risk if it falls into either one.

Pathway 1: Safety component of a regulated product (Article 6(1), Annex I)

Your AI system is high-risk if:

  • It is a safety component of a product (or is itself a product) covered by existing EU harmonisation legislation listed in Annex I (Section A), and
  • That product requires a third-party conformity assessment under the relevant legislation.

This pathway captures AI embedded in products that already have sectoral safety regulation. The logic is straightforward: if the product is dangerous enough to need third-party assessment, the AI inside it is dangerous enough to be high-risk.

Products in scope include:

  • Medical devices & IVDs: AI-powered diagnostic imaging, AI triage in clinical software
  • Machinery: AI controlling robotic arms, autonomous mobile robots in warehouses
  • Toys: AI-powered interactive toys that adapt behaviour to children
  • Radio equipment: AI managing spectrum allocation or interference avoidance
  • Civil aviation: AI flight management systems, autonomous drone navigation
  • Motor vehicles: Advanced driver assistance systems, autonomous driving modules
  • Marine equipment: AI navigation assistance, collision avoidance
  • Lifts: AI-controlled elevator dispatching with safety functions
  • Rail systems: AI signalling, automated train operation components
Real-world example — medical device manufacturer: A company develops an AI algorithm that analyses retinal scans to detect diabetic retinopathy. The algorithm is integrated into a Class IIa medical device. Because the medical device requires third-party conformity assessment under the Medical Devices Regulation (MDR), the AI component is automatically classified as high-risk under Pathway 1. The company must comply with both the MDR and the AI Act's high-risk requirements — but it benefits from a later deadline.

Real-world example — automotive supplier: An automotive tier-1 supplier develops an AI-based pedestrian detection module for integration into vehicle ADAS systems. The module is a safety component of the vehicle, and the vehicle requires type approval. The AI module is high-risk under Pathway 1.

Timeline note: AI systems in this pathway have a later deadline — 2 August 2027 — aligning with existing sectoral timelines.

Pathway 2: Standalone high-risk use case (Article 6(2), Annex III)

Your AI system is high-risk if it falls into one of the eight domain areas listed in Annex III. These are standalone AI systems — not embedded in other regulated products — that the EU legislature determined pose significant risks to fundamental rights, safety, or both.

Timeline: Full compliance required by 2 August 2026. This is the deadline most organisations are working toward right now.

Domain 1: Biometrics

  • Remote biometric identification (real-time use in publicly accessible spaces for law enforcement purposes is largely prohibited under Article 5)
  • Biometric categorisation based on sensitive attributes
  • Emotion recognition systems

Example: A building security company deploys a facial recognition system that identifies people approaching access points by matching their faces against a database of authorised employees. This is remote biometric identification (one-to-many matching), which is high-risk. A system whose sole purpose is one-to-one verification of a claimed identity is excluded from this Annex III category.

Domain 2: Critical infrastructure

  • AI managing safety of road traffic, water, gas, heating, or electricity supply
  • AI in digital infrastructure operations

Example: An energy utility uses an AI system to balance electricity grid loads and manage demand response. The AI makes operational decisions that affect electricity supply continuity. High-risk.

Domain 3: Education and vocational training

  • AI determining access to or admission into educational institutions
  • AI evaluating learning outcomes or directing learning processes
  • AI monitoring behaviour during examinations (proctoring)

Example: A university deploys an AI-powered proctoring system that monitors student behaviour during online exams, flagging suspicious activity. This falls squarely into Domain 3. The university is the deployer, the proctoring vendor is the provider.

Domain 4: Employment, workers management, access to self-employment

  • AI screening or filtering job applications (CV scanners)
  • AI placing targeted job advertisements
  • AI making decisions on promotion, termination, task allocation, or performance-based contracts
  • AI monitoring or evaluating worker performance

Example: A multinational retailer uses an AI-powered scheduling system that allocates shifts based on employee performance metrics and predicted store traffic. Because the system influences task allocation based on performance evaluation, it is high-risk.

Example: A recruitment platform offers an AI tool that ranks CVs against job descriptions and assigns compatibility scores. Every employer using this tool is a deployer of a high-risk system; the platform is the provider.

Domain 5: Access to essential private services and public services

  • AI in creditworthiness assessment and credit scoring
  • AI evaluating eligibility for public assistance benefits
  • AI assessing insurance risk (life and health)
  • AI evaluating emergency service dispatch priority

Example: A fintech company develops an AI model that analyses alternative data (social media activity, mobile phone usage patterns) to generate credit scores for consumers who lack traditional credit histories. This is a creditworthiness assessment system — high-risk.

Example: A municipal government deploys an AI system to prioritise social housing applications. The system scores applicants based on multiple criteria and produces a ranked list. This is an AI system evaluating eligibility for public services — high-risk.

Domain 6: Law enforcement

  • AI assessing risk of criminal offending (risk profiling)
  • AI used as polygraphs or for emotion detection in investigations
  • AI evaluating evidence reliability
  • AI used for facial recognition or profiling in investigations

Example: A police force uses an AI system that analyses historical crime data to predict areas with higher probability of future offences and deploys patrols accordingly. This is predictive policing — high-risk under Domain 6.

Domain 7: Migration, asylum, and border control

  • AI assessing asylum or visa applications
  • AI used in border risk assessment or security screening
  • AI used to detect, recognise, or identify persons in migration contexts

Example: A national border agency uses an AI system to assess the credibility of asylum seekers' claims by analysing textual and verbal responses. High-risk.

Domain 8: Administration of justice and democratic processes

  • AI assisting judicial decisions on facts or law
  • AI used in alternative dispute resolution
  • AI influencing elections or referendum outcomes

Example: A legal technology company offers an AI tool that predicts case outcomes based on past judicial decisions and recommends litigation strategies. If judges use this tool to assist their decisions, it is high-risk. If only lawyers use it for strategy (not directly assisting a judicial decision), the classification is less clear — this is where the Article 6(3) exception analysis becomes critical.

The Article 6(3) exception: not every Annex III system is high-risk

This is the clause most teams either miss entirely or misapply. Even if your AI system falls into one of the eight Annex III domains, it is not classified as high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, which Article 6(3) deems to be the case where at least one of these four conditions is fulfilled:

  1. Narrow procedural task: The system performs a purely procedural function — e.g., converting unstructured data into structured data, classifying incoming documents into categories for routing, or extracting specific fields from standardised forms.

  2. Improving completed human work: The system is intended to improve the result of a previously completed human activity — e.g., a grammar checker reviewing a human-drafted judicial ruling, or a spell-check and formatting tool applied to a completed asylum assessment.

  3. Pattern detection without influence: The system detects decision-making patterns or deviations from prior patterns without replacing or influencing the human assessment — e.g., an analytics dashboard that highlights anomalies in hiring decisions for HR managers to review, without recommending changes.

  4. Preparatory task: The system performs a preparatory task for an assessment relevant to an Annex III use case — e.g., a legal research tool that retrieves and organises relevant case law for a judge to review, without summarising or recommending outcomes.

Underlying all four conditions is the test set out in Article 6(3) itself: the system must not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, taking into account the severity and probability of possible harm.

When the exception applies — examples

  • Document classification for a court: An AI system that sorts incoming court filings by case type and routes them to the appropriate department. This is a narrow procedural task (condition 1) that does not influence judicial decisions.

  • Grammar checker for public benefit assessments: A tool that reviews the language and formatting of completed benefit determination letters. It improves completed human work (condition 2) without altering the substantive decision.

  • HR analytics dashboard: A system that shows managers aggregate statistics about their hiring patterns compared to company benchmarks. It detects patterns (condition 3) without recommending specific hiring decisions.

When the exception does NOT apply — examples

  • CV scoring tool: Even though it could be framed as "preparatory," a CV scorer that assigns rankings directly influences the shortlisting decision. Condition 4 does not protect it because the output is a substantive input to the hiring decision, not mere preparation.

  • Predictive credit scoring: A model that outputs a credit score used in lending decisions is not performing a narrow procedural task. The score directly influences whether someone receives credit.

  • Any system that profiles individuals: This is the critical caveat. The Article 6(3) exception does not apply if the AI system performs profiling of natural persons within the meaning of Article 4(4) GDPR. Any AI system that profiles people and falls within an Annex III domain is high-risk regardless of these exceptions.

Claiming the exception

To rely on the Article 6(3) exception, you must:

  1. Document your analysis: why the system fulfils at least one of the four conditions and why it does not pose a significant risk of harm.
  2. Confirm the system does not perform profiling.
  3. Register yourself and the system in the EU database under Article 49(2) before placing the system on the market or putting it into service.
  4. Retain the documentation and be prepared to defend the classification if challenged.

This is not a self-declaration you can file and forget. Authorities can review and reject your assessment, which would retroactively subject you to the full high-risk obligations.
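
If it helps to make the documentation duty concrete, here is one way such an assessment could be captured as a structured record. This is an illustrative sketch only: the class and field names (ExceptionAssessment, conditions_claimed, is_defensible, and so on) are hypothetical conventions, not an official template, and a real assessment still needs review by qualified counsel.

```python
# Illustrative sketch of an internal record for an Article 6(3) assessment.
# All names are hypothetical; they are not taken from the AI Act or any
# official template.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ExceptionCondition(Enum):
    NARROW_PROCEDURAL_TASK = "Article 6(3)(a)"
    IMPROVES_COMPLETED_HUMAN_WORK = "Article 6(3)(b)"
    PATTERN_DETECTION_NO_INFLUENCE = "Article 6(3)(c)"
    PREPARATORY_TASK = "Article 6(3)(d)"


@dataclass
class ExceptionAssessment:
    system_name: str
    annex_iii_domain: str                        # e.g. "employment", "education"
    performs_profiling: bool                     # profiling under GDPR Art. 4(4) blocks the exception
    conditions_claimed: list[ExceptionCondition] = field(default_factory=list)
    reasoning: str = ""                          # written justification, retained for authorities
    registered_in_eu_database_on: date | None = None

    def is_defensible(self) -> bool:
        """A claim is only defensible if profiling is ruled out, at least one
        condition is documented with reasoning, and registration was made."""
        return (
            not self.performs_profiling
            and bool(self.conditions_claimed)
            and bool(self.reasoning)
            and self.registered_in_eu_database_on is not None
        )


# Example: the HR analytics dashboard described above, claiming condition 3
assessment = ExceptionAssessment(
    system_name="HR analytics dashboard",
    annex_iii_domain="employment",
    performs_profiling=False,
    conditions_claimed=[ExceptionCondition.PATTERN_DETECTION_NO_INFLUENCE],
    reasoning="Shows aggregate hiring statistics only; does not recommend decisions.",
    registered_in_eu_database_on=date(2026, 6, 1),
)
print(assessment.is_defensible())  # True
```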

Decision flowchart: classifying your AI system step-by-step

Follow this process for each AI system in your AI systems inventory; a minimal code sketch of the same logic follows the list:

  1. Is the AI system a prohibited practice under Article 5? → Yes = the system cannot be deployed at all. Stop.
  2. Is the AI system a safety component of an Annex I product requiring third-party conformity assessment? → Yes = high-risk under Pathway 1. Compliance deadline: 2 August 2027.
  3. Does the AI system fall into one of the eight Annex III domain areas? → No = not high-risk. Check whether transparency obligations apply (limited-risk systems).
  4. Does the system perform profiling of natural persons (GDPR Article 4(4))? → Yes = high-risk. The Article 6(3) exception cannot apply.
  5. Does the Article 6(3) exception apply? Assess whether the system fulfils any of the four conditions (narrow procedural task, improves completed human work, detects patterns without influencing decisions, or preparatory task) and therefore does not pose a significant risk of harm. → If yes = not high-risk, but you must document your reasoning and register the system in the EU database.
  6. If no exception applies → high-risk under Pathway 2. Full compliance required by 2 August 2026.
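
The same decision logic can be expressed in code so that every system in an inventory is assessed the same way. The snippet below is a minimal sketch of the six steps above; the function and parameter names are hypothetical, and its output is a starting point for legal review rather than a definitive classification.

```python
# Minimal sketch of the classification flow above. Function and parameter names
# are hypothetical; the result is an input to legal review, not a final answer.

def classify(
    is_prohibited_practice: bool,       # Step 1: Article 5 prohibited practice?
    is_annex_i_safety_component: bool,  # Step 2: safety component of a third-party-assessed Annex I product?
    in_annex_iii_domain: bool,          # Step 3: falls into one of the eight Annex III domains?
    performs_profiling: bool,           # Step 4: profiling of natural persons (GDPR Art. 4(4))?
    exception_condition_met: bool,      # Step 5: any of the four Article 6(3) conditions fulfilled?
) -> str:
    if is_prohibited_practice:
        return "prohibited: cannot be deployed (Article 5)"
    if is_annex_i_safety_component:
        return "high-risk, Pathway 1: compliance deadline 2 August 2027"
    if not in_annex_iii_domain:
        return "not high-risk: check transparency obligations"
    if performs_profiling:
        return "high-risk, Pathway 2: exception unavailable due to profiling"
    if exception_condition_met:
        return "not high-risk: document the assessment and register in the EU database"
    return "high-risk, Pathway 2: compliance deadline 2 August 2026"


# Example: a CV-ranking tool that profiles applicants in an employment context
print(classify(False, False, True, True, False))
# -> high-risk, Pathway 2: exception unavailable due to profiling
```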

Common classification mistakes

Mistake 1: Assuming third-party AI tools are not your problem

If you integrate an AI system into your product or service and it falls into an Annex III domain, you may be reclassified as a provider under Article 25 — even if someone else built the model. This happens when you put the AI system on the market under your own name, make a substantial modification, or change its intended purpose. See the full analysis in our provider vs deployer guide.

Mistake 2: Believing human-in-the-loop eliminates high-risk status

The AI Act specifically includes systems that assist human decisions. An AI system that ranks candidates, scores creditworthiness, or recommends sentences is high-risk — even if a human makes the final call. The presence of human oversight is an obligation within the high-risk category, not a way to avoid it.

Mistake 3: Relying on company size for exemption

There is no blanket size exemption. SMEs and startups benefit from some procedural simplifications (simplified technical documentation, reduced fees, access to regulatory sandboxes), but the substantive obligations are identical. A two-person startup selling an AI hiring tool faces the same classification as a multinational HR technology platform.

Mistake 4: Ignoring extraterritorial scope

If the AI system's output is used with respect to persons in the EU — even if your company, servers, and the system itself are outside the EU — the AI Act applies. A US-based company selling an AI credit scoring tool to European banks must classify the system under the AI Act. This mirrors the extraterritorial approach of the GDPR, which caught many non-EU companies off guard. For the overlap between both regulations, see our AI Act vs GDPR comparison.

Mistake 5: Applying the Article 6(3) exception without documenting it

Some organisations informally decide their system is "just a tool" without performing a structured analysis. If an authority later disagrees, the organisation faces retroactive classification as high-risk with no documentation trail — the worst possible compliance position. The exception requires affirmative documentation and registration in the EU database.

Mistake 6: Confusing general-purpose AI models with high-risk AI systems

A general-purpose AI model (like a large language model) is not automatically high-risk. The model becomes potentially high-risk when it is integrated into a specific application that falls into an Annex III domain. The provider of the GPAI model has separate obligations under Articles 51-55; the provider of the downstream application bears the high-risk classification responsibility.

Mistake 7: Waiting for the Commission's guidance before classifying

The European Commission is expected to publish detailed guidelines on the practical implementation of the classification framework, including specific examples and edge cases. As of April 2026, this guidance has not yet been finalised. Waiting is not a viable strategy — the 2 August 2026 deadline is firm, and the legal text is sufficiently clear to begin classification now. Organisations that delay classification will not have time to complete the downstream compliance steps.

What happens once classified as high-risk

Classification is not the end — it is the starting point for a structured compliance programme. As a high-risk AI provider, you must implement the following:

  • Risk management system (Art. 9): Continuous, iterative process to identify, analyse, evaluate, and mitigate risks throughout the AI system's lifecycle
  • Data governance (Art. 10): Ensure training, validation, and testing datasets are relevant, representative, free of errors, and complete; examine for biases
  • Technical documentation (Art. 11): Prepare comprehensive documentation per Annex IV before placing the system on the market
  • Automatic logging (Art. 12): Build in automatic recording of events relevant to identifying risks and substantial modifications
  • Transparency (Art. 13): Provide clear instructions for use to deployers, including intended purpose, performance levels, limitations, and human oversight measures
  • Human oversight (Art. 14): Design the system so it can be effectively overseen by natural persons; specify oversight measures in instructions for use
  • Accuracy, robustness, cybersecurity (Art. 15): Achieve and maintain appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle
  • Quality management system (Art. 17): Establish a documented QMS covering compliance strategy, design and development, testing, data management, risk management, post-market monitoring, and incident reporting
  • Conformity assessment (Art. 43): Complete before market placement — self-assessment or notified body, depending on the system type
  • CE marking (Art. 48): Affix CE marking after successful conformity assessment
  • EU database registration (Art. 49): Register the system in the EU database before placing it on the market
  • Post-market monitoring (Art. 72): Establish a proportionate, documented monitoring plan to collect and analyse data on system performance in real-world conditions

Deployers of high-risk systems have a parallel but lighter set of obligations, centred on human oversight, log retention, transparency to affected persons, and (in some cases) fundamental rights impact assessments.

See the full EU AI Act compliance checklist for the step-by-step action plan covering both roles.

Practical timeline: what to do now

If you have not yet classified your AI systems, you are behind — but not too late. Here is a realistic timeline for the remaining months before 2 August 2026:

  1. Now (April 2026): Complete your AI systems inventory. List every AI system your organisation develops, deploys, or distributes.
  2. April–May 2026: Run each system through the classification flowchart above. Document your reasoning for every system, including those you determine are not high-risk.
  3. May–June 2026: For each high-risk system, assign ownership and begin technical documentation and risk management.
  4. June–July 2026: Complete conformity assessment. For most Annex III systems, this is a self-assessment — but it must be thorough and documented.
  5. July 2026: Register in the EU database, affix CE marking, and establish post-market monitoring processes.
  6. 2 August 2026: Deadline. Systems must be compliant.

Frequently asked questions

Does the AI Act apply to AI systems developed outside the EU?

Yes. The AI Act applies to any provider placing an AI system on the EU market or putting it into service in the EU, regardless of where the provider is established. It also applies to providers and deployers located outside the EU if the output of their AI system is used in the EU. This extraterritorial scope mirrors the GDPR's approach and means non-EU companies cannot avoid classification by hosting systems outside Europe.

Can an AI system change from non-high-risk to high-risk?

Yes. If you modify an AI system's intended purpose so that it falls into an Annex III domain, or if you make a substantial modification to a high-risk system, the classification can change. Under Article 25, such modifications can also change your role from deployer to provider. Classification is not a one-time assessment — it must be revisited whenever the system's purpose, capabilities, or deployment context changes.

What if my system straddles two risk categories?

Apply the higher classification. If there is any reasonable argument that your system falls into an Annex III domain, and you cannot confidently demonstrate that the Article 6(3) exception applies, treat it as high-risk. The cost of under-classification (potential fines of up to EUR 15 million or 3% of turnover and market withdrawal) far exceeds the cost of over-classification (unnecessary compliance work).

Is a chatbot high-risk?

Not automatically. A general-purpose customer service chatbot is typically a limited-risk or minimal-risk system subject to transparency obligations (users must be told they are interacting with AI). However, if the chatbot is used in an Annex III context — e.g., screening job applicants, triaging emergency calls, or assisting judicial decisions — it becomes high-risk regardless of the underlying technology.

What is the difference between high-risk classification under the AI Act and a Data Protection Impact Assessment under the GDPR?

They serve different purposes but often apply to the same systems. A DPIA under GDPR Article 35 assesses risks to personal data processing. AI Act classification assesses whether the AI system falls into predefined high-risk categories based on its intended use. A system can require a DPIA without being high-risk under the AI Act (e.g., an AI system processing health data for research that does not fall into Annex III), and vice versa. In practice, many high-risk AI systems will require both. See our full comparison of the AI Act and GDPR.

Where can I get help classifying my system?

Start with the free AI Act risk classification assessment to get an initial classification for your specific use case. For complex or borderline cases — particularly those involving the Article 6(3) exception — consult qualified legal counsel with AI regulatory expertise. The complete AI Act guide provides article-by-article analysis of every provision referenced in this post.

Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final classification decisions should be reviewed by qualified legal counsel.
