Free AI Act Risk Classification in 5 Minutes

Use this free AI Act risk classification tool to determine your AI system's risk tier, applicable obligations, and next compliance steps.

Legalithm Team · 20 min. read

Free AI Act Risk Classification Assessment: Identify Your Obligations in 5 Minutes

Risk classification is the first decision every AI company must make under the EU AI Act. Until you know your risk tier, you cannot determine which obligations apply, what documentation you need, or how much effort and budget compliance requires.

The problem is that the AI Act's classification logic spans multiple articles, two annexes, and several cross-references to other EU legislation — making it difficult for non-specialists to reach a confident answer without legal support.

Legalithm's free AI Act risk classification assessment solves this first step. Describe your AI system in plain language, and the tool maps your use case to the regulation's risk framework, identifies your likely role (provider, deployer, or both), and outlines the obligations that apply — in minutes, not weeks.

This article explains how risk classification works under the AI Act, walks through the four tiers with concrete examples, describes what the assessment tells you, and shows what three different types of companies learned from going through the process.

TL;DR — AI Act risk classification essentials

  • The EU AI Act uses a four-tier risk framework: prohibited, high-risk, limited risk, and minimal risk.
  • Your risk tier determines your entire compliance programme — from no obligations (minimal risk) to a complete ban (prohibited).
  • Classification depends on the intended purpose of the AI system, not the underlying technology. The same model can be minimal-risk in one application and high-risk in another.
  • Prohibited practices have been enforceable since 2 February 2025. High-risk obligations apply from 2 August 2026.
  • Misclassifying your system — in either direction — carries real consequences: under-classification creates regulatory exposure; over-classification wastes compliance resources.
  • The free AI Act assessment provides a preliminary classification, applicable obligations by role, and next-step guidance in under five minutes.
  • After classification, the next steps are: build an AI systems inventory, follow the compliance checklist, and prepare technical documentation if you are high-risk.

Why risk classification is the essential first step

Every other compliance activity depends on your risk tier:

Risk tier | What you must do
Prohibited | Stop. The AI practice is banned in the EU. Redesign or withdraw the system.
High-risk | Full compliance programme: risk management, technical documentation, conformity assessment, CE marking, registration, post-market monitoring.
Limited risk | Transparency obligations: disclose AI interaction, label generated content, notify about emotion recognition or biometric categorisation.
Minimal risk | No specific obligations under the AI Act (GDPR and other regulations may still apply).

Without a clear classification, organisations either waste months building compliance infrastructure for a system that turns out to be minimal risk, or — worse — discover too late that their system is high-risk and face enforcement action with no documentation in place.

The classification decision also determines:

  • Budget allocation — High-risk compliance can require significant investment in documentation, testing, and monitoring. Minimal-risk systems need none of this.
  • Product roadmap — If a planned feature would push the system into high-risk territory, the product team needs to know before development starts.
  • Vendor due diligence — Deployers need to verify the risk tier of AI systems they procure from third-party providers.
  • Investor and customer confidence — Demonstrating you have classified your systems and understand your obligations is increasingly a due-diligence requirement.

Is your AI system high-risk?

Find out in 2 minutes. Free, no signup required.

Check now

The four risk tiers explained

Tier 1: Prohibited AI practices

Article 5 bans AI practices that the EU considers an unacceptable threat to fundamental rights. These have been enforceable since 2 February 2025.

Prohibited practices include:

Practice | Description | Example
Social scoring | Using AI to evaluate or classify people based on social behaviour, leading to detrimental treatment | A municipality ranks residents by civic behaviour and restricts park access for low-scoring individuals
Subliminal manipulation | AI techniques that distort behaviour below the threshold of awareness, causing harm | A gaming app uses imperceptible visual cues designed to increase in-app spending beyond the user's conscious intention
Exploitation of vulnerabilities | Targeting people based on age, disability, or socio-economic situation to distort behaviour | A payday loan app uses AI to target financially distressed individuals with manipulative offers
Untargeted facial image scraping | Building facial recognition databases by scraping images from the internet or CCTV without consent | A startup scrapes social media profile photos to build a facial recognition search engine
Emotion recognition in workplaces/schools | Inferring emotions of employees or students (except for medical or safety reasons) | An employer uses webcam-based emotion recognition during video calls to monitor employee engagement
Predictive policing (individual-level) | AI that predicts individual criminal behaviour based solely on profiling | Police software flags individuals as likely to commit crimes based on neighbourhood and demographic data
Biometric categorisation by sensitive attributes | Using biometrics to classify people by race, religion, sexual orientation, or political opinions | An airport system categorises travellers by ethnicity from facial features
Real-time remote biometric identification in public spaces | Live facial recognition in public areas by law enforcement (narrow exceptions exist) | City-wide live facial recognition cameras scanning all pedestrians

Penalty for prohibited practices: Up to EUR 35 million or 7% of global annual turnover, whichever is higher.

If the assessment indicates your system may involve a prohibited practice, the immediate action is to halt deployment in the EU and consult legal counsel on whether the system can be redesigned to fall outside the prohibition.

Tier 2: High-risk AI systems

High-risk classification triggers the most comprehensive set of obligations under the AI Act. There are two pathways:

Pathway A — Safety components (Article 6(1)): Your AI system is a safety component of (or is itself) a product covered by EU harmonised legislation listed in Annex I, and that product requires a third-party conformity assessment. This covers AI in medical devices, machinery, toys, marine equipment, civil aviation, motor vehicles, and more.

Pathway B — Annex III use cases (Article 6(2)): Your AI system falls into one of the eight high-risk areas listed in Annex III:

  1. Biometrics — Remote biometric identification (not real-time in public spaces, which is prohibited), biometric categorisation (not by sensitive attributes, which is prohibited), emotion recognition (not in workplaces/schools, which is prohibited).
  2. Critical infrastructure — AI managing electricity, gas, heating, water supply, digital infrastructure, or road traffic.
  3. Education — AI determining access to education, evaluating learning outcomes, monitoring prohibited behaviour during tests, or assessing appropriate education levels.
  4. Employment — AI for recruitment (CV filtering, interview evaluation), promotion/termination decisions, task allocation, or performance monitoring.
  5. Access to essential services — Credit scoring, insurance risk assessment, public benefit eligibility, emergency dispatch prioritisation.
  6. Law enforcement — Lie detection, evidence reliability assessment, risk of offending assessment, crime analytics (not individual predictive policing).
  7. Migration and border control — Risk assessment of travellers, examination of visa and asylum applications, document authentication.
  8. Administration of justice — AI assisting judicial decisions, alternative dispute resolution, case-law research that influences case outcomes.

Important exception: Article 6(3) provides that an AI system listed in Annex III is not high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons — for instance, if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision patterns without replacing human assessment, or performs a preparatory task. However, systems that profile individuals are always considered high-risk regardless of this exception.
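
To make the Pathway B test concrete, here is a minimal sketch of the decision logic in Python. The field names are our own simplification of the legal test (Article 6(3) lists more exception grounds than shown here), not an implementation of the regulation:

```python
# Illustrative sketch of the Article 6(2)/6(3) logic described above.
# The dataclass fields are our own simplification of the legal test;
# a real classification requires documented legal analysis.
from dataclasses import dataclass

@dataclass
class AnnexIIICheck:
    in_annex_iii_area: bool       # falls within one of the eight areas
    profiles_individuals: bool    # profiling makes the exception unavailable
    narrow_procedural_task: bool  # an Article 6(3) ground
    preparatory_task_only: bool   # another Article 6(3) ground

def is_high_risk_pathway_b(check: AnnexIIICheck) -> bool:
    if not check.in_annex_iii_area:
        return False  # Pathway B does not apply (Pathway A is a separate test)
    if check.profiles_individuals:
        return True   # profiling systems are always high-risk
    if check.narrow_procedural_task or check.preparatory_task_only:
        return False  # exception may apply; document the reasoning thoroughly
    return True

# A CV-ranking tool profiles candidates, so the exception cannot apply:
assert is_high_risk_pathway_b(AnnexIIICheck(True, True, False, False))
```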

Real-world example: A SaaS company provides an AI tool that analyses CVs and ranks candidates for recruiters. This is squarely within Annex III, area 4 (employment — recruitment). The system is high-risk. The provider must implement the full set of Chapter III obligations before placing it on the EU market.

Real-world example: A utility company uses an AI system to predict electricity demand and optimise grid load balancing. This falls under Annex III, area 2 (critical infrastructure — electricity supply management). High-risk classification applies, requiring a risk management system, technical documentation, and ongoing monitoring.

Real-world example: A legal-tech startup offers an AI tool that searches case law and highlights relevant precedents for lawyers. If the tool only performs a preparatory task (finding relevant cases) without replacing or materially influencing judicial assessment, it may qualify for the Article 6(3) exception and not be classified as high-risk. However, if the tool ranks cases by predicted relevance to the outcome or suggests how a judge should rule, it would be high-risk under Annex III, area 8.

For the full classification methodology, see Is my AI system high-risk?

Tier 3: Limited risk (transparency obligations)

Article 50 imposes transparency obligations on certain AI systems regardless of whether they are high-risk:

  • AI systems interacting with people must disclose that the user is interacting with an AI system (e.g., chatbots, virtual assistants).
  • Emotion recognition and biometric categorisation systems (those not already prohibited) must notify the individuals subject to them.
  • AI-generated or manipulated content (deepfakes, synthetic audio/video/images) must be labelled as artificially generated or manipulated in a machine-readable format.
  • AI-generated text published to inform the public on matters of public interest must be labelled as AI-generated (unless it has undergone human editorial review).

Real-world example: A marketing agency uses AI to generate product photos that depict realistic-looking people who do not exist. Under Article 50, these images must be labelled as AI-generated. The agency must also ensure the metadata includes a machine-readable marker indicating artificial generation.
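
As an illustration of machine-readable labelling, the sketch below writes a marker into a PNG's metadata using Pillow. The key name "ai_generated" and the file names are our own conventions; production labelling would follow an emerging standard such as C2PA rather than ad-hoc tags:

```python
# Attach a machine-readable "AI-generated" marker via PNG text chunks.
# Key names and file names are illustrative, not mandated by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("generated_product_photo.png")  # placeholder input file

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-image-model")  # hypothetical model name

image.save("generated_product_photo_labelled.png", pnginfo=metadata)
```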

Tier 4: Minimal risk

AI systems that do not fall into any of the above categories have no specific obligations under the AI Act. This includes:

  • AI-powered spam filters
  • Inventory optimisation tools
  • Content recommendation engines (for entertainment, not essential services)
  • AI-assisted code-completion tools
  • Internal analytics dashboards
  • AI-enhanced search (non-biometric)

Providers of minimal-risk systems are encouraged to adopt voluntary codes of conduct and should remember that the GDPR, consumer protection law, and other regulations may still apply.

How the free assessment works

The Legalithm AI Act risk classification assessment follows a structured process:

Step 1: Describe your AI system

You provide a plain-language description of your AI system's intended purpose, the context in which it operates, and who it affects. No legal expertise is required — describe what the system does as you would explain it to a colleague.

Step 2: System analysis

The assessment analyses your description against the AI Act's classification criteria: the Article 5 prohibitions, the Annex I safety-component pathway, the Annex III high-risk areas, and the Article 50 transparency triggers.

Step 3: Role determination

The assessment identifies your likely role under the AI Act:

  • Provider — you develop the AI system or put it on the market under your name.
  • Deployer — you use a third-party AI system under your own authority.
  • Both — you develop some systems and deploy others.

Your role determines which specific obligations apply. See provider vs. deployer obligations for the full breakdown.

Step 4: Results and next steps

You receive:

  • Risk classification — which tier your system falls into.
  • Applicable obligations — a tailored list of requirements based on your risk tier and role.
  • Enforcement timeline — which deadlines apply to your obligations.
  • Next-step guidance — concrete actions to take, with links to relevant resources.

The entire process takes under five minutes. Run the free AI Act assessment now.
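
For illustration, a result containing these four elements could be represented as follows. The field names are hypothetical and do not reflect the tool's actual export format:

```python
# A hypothetical shape for the result described above. Field names are
# illustrative only, not Legalithm's actual output schema.
assessment_result = {
    "risk_tier": "limited",                # prohibited | high | limited | minimal
    "role": "provider",                    # provider | deployer | both
    "applicable_obligations": [
        "Article 50: disclose AI interaction at the start of each session",
        "Article 50: machine-readable labelling of generated content",
    ],
    "enforcement_deadline": "2026-08-02",  # Article 50 applies from this date
    "next_steps": [
        "Implement the disclosure notice",
        "Update the privacy policy",
        "Record the classification rationale",
    ],
}
```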

What you learn from the assessment

If your system is high-risk

The assessment identifies that your system requires the full Chapter III compliance programme and outlines the key workstreams:

  1. Risk management system — design and implement an iterative risk assessment process (Article 9).
  2. Technical documentation — prepare the nine mandatory sections of Annex IV documentation.
  3. Data governance — ensure training, validation, and testing datasets meet quality standards (Article 10).
  4. Conformity assessment — determine whether self-assessment or a notified body is required.
  5. Post-market monitoring — establish continuous monitoring and incident-reporting processes.

If your system is limited risk

The assessment identifies the specific transparency obligations that apply and provides practical guidance on implementation — for example, how to implement chatbot disclosure notices that comply with Article 50.
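
As a minimal sketch of such a disclosure (the session store is a hypothetical stand-in for a real chat backend), the notice is delivered as the first message of every session, before any AI-generated reply:

```python
# Minimal sketch of an Article 50 interaction disclosure. The in-memory
# session store is a placeholder; the point is that the notice precedes
# any AI reply in every new conversation.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

sessions: dict[str, list[dict]] = {}

def open_chat_session(session_id: str) -> str:
    """Create a session whose first message is the mandatory disclosure."""
    sessions[session_id] = [{"role": "notice", "text": AI_DISCLOSURE}]
    return AI_DISCLOSURE

print(open_chat_session("session-42"))
```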

If your system is minimal risk

The assessment confirms you have no specific AI Act obligations and suggests voluntary best practices, while reminding you that GDPR, sector-specific regulation, and AI literacy requirements (already in force) may still apply.

If your system may be prohibited

The assessment flags the relevant prohibition under Article 5 and recommends immediate review by legal counsel. It identifies whether the system might be redesigned to fall outside the prohibition (e.g., by adding meaningful human oversight to convert a banned fully automated practice into a permissible human-supervised process).

Who the assessment is for

Startup founders

You are building an AI product and need to understand your regulatory position before the next funding round. Investors — particularly European VCs and institutional investors — increasingly ask for AI Act compliance assessments during due diligence. The free assessment gives you a defensible first answer to "have you assessed your AI Act obligations?"

Product and engineering teams

You are designing new AI features and need to understand whether a planned feature would trigger high-risk classification. The assessment helps you make risk-informed product decisions before committing development resources — not after.

Compliance and legal teams

You are responsible for the organisation's regulatory posture and need a structured starting point for the AI Act compliance programme. The assessment provides the classification foundation that your compliance roadmap will build on.

Procurement and vendor management

You are evaluating third-party AI tools and need to understand the risk tier of systems you are deploying. As a deployer, you have your own obligations — and they depend on the provider's risk classification.

Consultants and advisors

You are advising clients on AI Act compliance and need a rapid, evidence-based classification tool to use in client engagements. The assessment provides a structured output that serves as the basis for deeper advisory work.

Case studies: three companies, three classifications

Case study 1: High-risk — HR tech startup

Company: A 15-person startup providing an AI-powered applicant tracking system that screens CVs, ranks candidates, and generates shortlists for recruiters.

Assessment result: High-risk (Annex III, area 4 — employment, specifically AI used for recruitment decisions). The system directly influences which candidates progress and which are filtered out, meeting the threshold for significant impact on individuals' employment opportunities.

Role: Provider (they develop and market the system under their own brand).

Obligations identified:

  • Implement a risk management system addressing bias, accuracy, and robustness risks in the recruitment context.
  • Prepare Annex IV technical documentation covering the model architecture, training data composition, evaluation metrics, and known limitations.
  • Conduct a self-assessment conformity procedure (Annex VI) — notified body not required for employment AI.
  • Implement logging so deployer-employers can audit which candidates were ranked and why.
  • Establish post-market monitoring for bias drift and accuracy degradation.
  • Register in the EU database before market placement.

Next steps taken: The team followed the compliance checklist, built their AI systems inventory, and began drafting Annex IV documentation.

Case study 2: Limited risk — Customer support SaaS

Company: A 50-person SaaS company that provides an AI chatbot for e-commerce customer support. The chatbot answers product questions, processes returns, and escalates complex issues to human agents. It does not make decisions with legal or significant effects — it facilitates customer service interactions.

Assessment result: Limited risk — the chatbot is an AI system interacting directly with natural persons, triggering Article 50 transparency obligations. It is not high-risk because customer support chatbots for general retail queries do not fall within any Annex III area and do not make decisions with legal or significant effects on individuals.

Role: Provider (they develop and sell the chatbot product).

Obligations identified:

  • Implement disclosure that users are interacting with an AI system at the start of every conversation.
  • If the chatbot generates text published on behalf of the e-commerce client, ensure AI-generation labelling is in place.
  • No Chapter III obligations (risk management system, conformity assessment, etc.) required.

Additional considerations: Although the AI Act obligations are limited, the chatbot processes personal data (customer names, order numbers, conversation content), so GDPR compliance remains essential. The company also needs to ensure its privacy policy covers the chatbot's data processing.

Next steps taken: The team added a persistent AI disclosure banner to the chatbot interface and updated their privacy policy to describe the chatbot's data processing.

Case study 3: Minimal risk — Internal analytics tool

Company: A 200-person manufacturing firm using an internally developed ML model to predict equipment maintenance needs based on sensor data (vibration, temperature, pressure). The system alerts maintenance teams to schedule proactive servicing. It does not interact with the public, does not process personal data, and does not make decisions affecting individuals.

Assessment result: Minimal risk — predictive maintenance on industrial equipment using non-personal sensor data does not fall within any Annex III area, does not involve any prohibited practice, and does not interact with natural persons in a way that triggers Article 50.

Role: Deployer of their own internal system (they developed it for their own use; they are not placing it on the market).

Obligations identified:

  • No specific AI Act obligations.
  • AI literacy requirement (already in force): ensure maintenance staff using the system have sufficient understanding of its capabilities and limitations.
  • Consider voluntary codes of conduct for responsible AI use.

Next steps taken: The company documented the system in their internal AI inventory for governance purposes and provided training to maintenance teams on interpreting the model's alerts.

What to do after classification

Classification is the starting line, not the finish. Here is the post-classification workflow:

For high-risk systems

  1. Build your AI systems inventory — Record the classification result, intended purpose, risk tier, and role for each system. See How to build an AI systems inventory; a minimal record sketch follows this list.
  2. Follow the compliance checklist — The EU AI Act compliance checklist for 2026 provides a step-by-step action plan.
  3. Start technical documentation — Begin drafting Annex IV documentation as early as possible. This is typically the most time-consuming workstream.
  4. Conduct a FRIA — If you are a deployer (especially a public-sector deployer), complete a fundamental rights impact assessment.
  5. Prepare for conformity assessment — Determine whether self-assessment or a notified body applies.
  6. Coordinate with GDPR compliance — If your system processes personal data, align your AI Act programme with your GDPR obligations. See EU AI Act vs. GDPR.
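
For illustration, a single inventory entry might capture the following fields. The schema is our own sketch of the items listed above, not a prescribed format:

```python
# Minimal sketch of a single AI-inventory record. Field names are our own
# illustration, not a mandated schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    risk_tier: str                  # "prohibited" | "high" | "limited" | "minimal"
    role: str                       # "provider" | "deployer" | "both"
    classified_on: date
    rationale: str                  # your evidence if a regulator asks
    reassessment_triggers: list[str] = field(default_factory=list)

record = AISystemRecord(
    name="CV screening module",
    intended_purpose="Rank job applicants for recruiters",
    risk_tier="high",
    role="provider",
    classified_on=date(2026, 1, 15),
    rationale="Annex III, area 4 (employment / recruitment)",
    reassessment_triggers=["intended purpose changes", "Annex III amended"],
)
```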

For limited-risk systems

  1. Implement transparency disclosures — AI interaction notices, content labelling, emotion recognition/biometric categorisation notifications as applicable.
  2. Update your privacy policy — See the privacy policy checklist for AI companies.
  3. Document the classification — Record why the system is limited-risk, not high-risk. This is your evidence if a regulator asks.

For minimal-risk systems

  1. Document the classification decision — Record the assessment and reasoning in your internal records.
  2. Meet AI literacy requirements — Already enforceable since February 2025.
  3. Consider voluntary best practices — Codes of conduct and responsible-AI frameworks build trust with customers and regulators.
  4. Monitor for reclassification — If the system's intended purpose changes, or if the Commission updates the Annex III list, reassess.

AI literacy requirements — already in force

One obligation applies to all organisations operating AI systems, regardless of risk tier: Article 4 requires that providers and deployers ensure their staff and other persons dealing with AI on their behalf have a sufficient level of AI literacy, taking into account:

  • Their technical knowledge and experience.
  • The context in which the AI systems are used.
  • The persons or groups of persons on whom the AI systems are to be used.

This obligation has been enforceable since 2 February 2025. Practical steps include:

  • Training programmes for employees who interact with AI systems.
  • Role-specific education (technical teams need different training than end users).
  • Documented records of training provided (see the sketch after this list).
  • Periodic refresher training as AI systems and regulations evolve.
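
A documented record can be as simple as an append-only log. The sketch below uses JSON Lines; the file name and field names are our own illustrative choices:

```python
# Minimal sketch of an append-only AI literacy training log (JSON Lines).
import json
from datetime import date

def log_training(path: str, employee: str, role: str, topic: str) -> None:
    entry = {
        "date": date.today().isoformat(),
        "employee": employee,
        "role": role,    # supports role-specific education, as noted above
        "topic": topic,
    }
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

log_training("ai_literacy_log.jsonl", "J. Doe", "maintenance", "Interpreting model alerts")
```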

Upcoming deadlines

Date | Milestone | Action required
2 February 2025 (past) | Prohibited practices enforceable; AI literacy in force | Verify no prohibited practices; begin AI literacy training
2 August 2025 | GPAI model obligations; governance framework | GPAI model providers must comply with Articles 51–56
2 August 2026 | High-risk obligations; deployer obligations; Article 50 transparency | Full Chapter III compliance for high-risk systems; deployer duties under Article 26; transparency obligations
2 August 2027 | Pathway A obligations (safety components under Annex I) | AI systems in regulated products (medical devices, machinery, etc.) must comply

The 2 August 2026 deadline is the most operationally demanding. Organisations that have not yet classified their AI systems are already behind the recommended timeline for achieving compliance by that date.

Common mistakes in risk classification

  1. Classifying based on technology, not purpose. A large language model is not inherently high-risk. A large language model used to screen job applications is. Classification depends on intended purpose and context, not the underlying technology.
  2. Ignoring the Annex III update mechanism. The European Commission can add new high-risk use cases to Annex III through delegated acts. A system classified as minimal risk today could become high-risk in the future if the list is expanded.
  3. Assuming deployers do not need to classify. Deployers of high-risk AI systems have their own obligations, and they need to verify the provider's classification. If a deployer changes the intended purpose of a system, the system must be reclassified — and the deployer may become a provider.
  4. Relying on the Article 6(3) exception without documentation. The exception for Annex III systems that do not pose significant risk is narrow. If you rely on it, document your reasoning thoroughly — regulators will expect evidence, not assertions.
  5. Classifying once and never revisiting. Classification must be reassessed when the system's intended purpose changes, when the system is modified (e.g., fine-tuned for a new use case), or when the regulatory framework is updated. Build reassessment triggers into your compliance process.
  6. Conflating EU AI Act classification with internal risk assessment. Your internal risk framework may label a system as "medium risk" for business purposes, but the AI Act applies its own test: a system either meets the high-risk criteria of Article 6 or it does not. Internal labels do not override regulatory classification.

Frequently asked questions

Is the risk classification assessment really free?

Yes. The free AI Act assessment provides a full risk classification, role determination, and obligation summary at no cost. If you need to continue to detailed Annex IV documentation drafting and PDF export, that is available in the paid workflow — but the classification itself is free.

How accurate is the automated classification?

The assessment maps your description against the AI Act's classification criteria systematically. It provides a strong preliminary classification suitable for planning purposes. For borderline cases or systems that may qualify for the Article 6(3) exception, we recommend validating the result with legal counsel before finalising your compliance strategy.

Can I classify multiple AI systems?

Yes. Run the assessment separately for each distinct AI system. Different systems within the same organisation can have different risk tiers — an AI-powered customer support chatbot (limited risk) and an AI-based credit-scoring model (high-risk) in the same company require different compliance approaches.

What if my system spans multiple Annex III areas?

A single system can be high-risk under multiple Annex III categories (e.g., a biometric system used in law enforcement touches both area 1 and area 6). The system is classified as high-risk, and the obligations apply based on the most demanding applicable category. The assessment identifies all relevant areas.

Does the classification change if I modify my AI system?

Yes. If you modify the intended purpose, expand the deployment context, or substantially change the system's functionality, you must reassess the classification. Adding a new use case (e.g., adapting an internal analytics tool to evaluate employee performance) can shift a minimal-risk system to high-risk.

What should I bring to the assessment?

You should be able to describe: what the AI system does, what decisions or outputs it produces, who is affected by those outputs, and the context in which it operates (industry, geography, user base). Technical details about the model architecture are helpful but not required for classification purposes.

Ready to classify your AI system? Run the free AI Act assessment and get your risk classification, applicable obligations, and next steps in under five minutes.

For the full regulatory picture, explore the complete AI Act guide.

Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.


Check your AI system's compliance

Free assessment, no signup. Get your risk classification in minutes.

Start the free assessment