AI Act Prohibited Practices: The Complete Article 5 Guide for 2026

TL;DR

  • The EU AI Act absolutely bans eight categories of AI practices under Article 5. These are not "high-risk" — they are prohibited, full stop.
  • The prohibitions have been enforceable since 2 February 2025, making them the first AI Act obligations to take legal effect. Organisations deploying banned AI systems are already exposed.
  • Violations carry the harshest penalties in EU digital regulation: up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher — exceeding even GDPR fines. See our full penalties breakdown.
  • The eight prohibited practices are: (1) subliminal or manipulative AI techniques, (2) exploitation of vulnerabilities, (3) social scoring, (4) criminal risk prediction based solely on profiling or personality traits, (5) untargeted facial recognition scraping, (6) emotion recognition in workplaces and education, (7) biometric categorisation for sensitive attributes, and (8) real-time remote biometric identification by law enforcement in public spaces.
  • Most prohibitions are absolute — no exceptions. A small number have narrow carve-outs (e.g., real-time biometric identification for law enforcement under strict judicial safeguards, emotion recognition for medical or safety purposes).
  • The European Commission published guidelines on prohibited practices in February 2025 to help organisations interpret the boundaries.
  • If you are unsure whether your AI system is prohibited, start with our free AI Act risk classification tool — it takes less than five minutes.

Article 5 of the EU AI Act draws the absolute boundary of what AI systems may never do in the European Union. Unlike high-risk AI systems that can be placed on the market if they meet strict requirements, prohibited practices are banned outright. No amount of technical documentation, conformity assessment, or risk management makes them lawful.

This guide covers all eight prohibited practices in detail — what the law says, what real-world systems are caught, which narrow exceptions exist, and how to determine whether your AI system crosses the line.

Why Article 5 matters — the highest-risk category in the AI Act

The EU AI Act organises AI systems into a risk-based pyramid. At the base are minimal-risk systems with no specific obligations. In the middle are limited-risk systems subject to transparency requirements. Higher up are high-risk systems with extensive technical and governance requirements. At the very top — above high-risk — sit the prohibited practices.

The distinction matters: high-risk AI systems can be legally deployed if they comply with the Act's requirements. Prohibited AI practices cannot be deployed at all, regardless of compliance efforts.

Enforceable since 2 February 2025

Article 5 prohibitions were the first AI Act obligations to become enforceable. While most of the Act's high-risk requirements apply from 2 August 2026, and GPAI model obligations from 2 August 2025, the prohibited practices took effect on 2 February 2025 — just six months after the Act entered into force.

This means that any organisation that was operating a banned AI system on 2 February 2025 became immediately non-compliant. There was no transition period. The European Commission and national market surveillance authorities have been empowered to act from that date.

Penalties: EUR 35 million or 7% of global annual turnover

Deploying a prohibited AI practice triggers the highest fine tier under Article 99 of the AI Act:

  • For large organisations: the higher of EUR 35 million or 7% of total worldwide annual turnover for the preceding financial year.
  • For SMEs and startups: the lower of those two amounts — a proportionality safeguard, but still potentially devastating.

To put this in perspective: GDPR's maximum fine is EUR 20 million or 4% of global turnover. The AI Act's prohibited-practice ceiling is 1.75 times higher on both measures. The EU legislature sent a deliberate signal: these practices are considered so fundamentally incompatible with European values that the penalties must be extraordinary.
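For illustration, the ceiling logic reduces to a two-line computation. The following is a minimal sketch in Python; the function name and the `is_sme` flag are illustrative assumptions, not terms from the Act, and it models only the Article 99 fine ceiling, not how an authority would set an actual fine.

```python
def article_5_fine_ceiling(worldwide_annual_turnover_eur: float,
                           is_sme: bool = False) -> float:
    """Maximum fine for a prohibited-practice violation under Article 99.

    Large organisations: the HIGHER of EUR 35M or 7% of worldwide annual
    turnover. SMEs and startups: the LOWER of the two amounts (the
    proportionality safeguard described above).
    """
    fixed_cap = 35_000_000.0
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# A group with EUR 2bn turnover faces a EUR 140M ceiling (7% exceeds EUR 35M);
# an SME with EUR 10M turnover faces a EUR 700k ceiling (the lower amount).
print(article_5_fine_ceiling(2_000_000_000))            # 140000000.0
print(article_5_fine_ceiling(10_000_000, is_sme=True))  # 700000.0
```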

The 8 prohibited AI practices explained

Each of the eight prohibitions targets a specific category of AI use that the EU considers unacceptable. Below, we break down each one with the legal reference, a plain-language summary, real-world examples, and any exceptions.

1. Subliminal manipulation — Article 5(1)(a)

What the law says: The placing on the market, putting into service, or use of an AI system that deploys subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behaviour of a person or group in a manner that causes or is reasonably likely to cause that person, another person, or group significant harm.

What this means in practice: This prohibition targets AI systems that manipulate human decision-making through techniques people cannot perceive or defend against. The technique must operate below the threshold of awareness (subliminal) or be purposefully manipulative/deceptive, must materially distort behaviour, and must cause or be reasonably likely to cause significant harm — physical, psychological, financial, or to fundamental rights.

Real-world examples of what's banned:

  • An AI-powered advertising system that embeds imperceptible audio or visual cues in video content to drive purchasing behaviour, causing consumers significant financial harm.
  • A political campaigning AI that generates hyper-personalised disinformation tailored to exploit individual psychological profiles, materially distorting voting behaviour.
  • A dark-pattern AI engine that uses deceptive interface manipulation to trick users into subscribing to financial products they do not understand.

What is NOT banned: Standard AI-driven personalisation (product recommendations, content curation) generally does not qualify. Legitimate persuasion, marketing, and recommendation systems remain lawful where they operate transparently and do not cause significant harm.

Exceptions: None. This prohibition is absolute.

2. Exploitation of vulnerabilities — Article 5(1)(b)

What the law says: The placing on the market, putting into service, or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or the effect of materially distorting their behaviour in a manner that causes or is reasonably likely to cause that person or another person significant harm.

What this means in practice: This prohibition recognises that certain groups — children, elderly people, persons with disabilities, people in financial distress — are particularly susceptible to AI-driven manipulation. It bans AI systems designed to take advantage of these vulnerabilities.

Real-world examples of what's banned:

  • An AI chatbot that targets elderly users with cognitive decline to sell them unnecessary insurance products by exploiting their confusion about existing coverage.
  • A predatory lending AI that identifies financially distressed individuals through behavioural data and serves them high-interest loan offers calibrated to exploit their desperation.
  • An AI-powered toy or app that manipulates children's behaviour to drive in-app purchases, exploiting their developmental inability to distinguish advertising from content.
  • A gambling platform's AI that identifies problem gamblers through behavioural markers and increases gambling prompts rather than implementing responsible gambling measures.

What is NOT banned: AI systems that serve vulnerable groups for their benefit — assistive technology, age-appropriate content filtering, accessibility tools — are not caught, provided they do not distort behaviour to cause harm.

Exceptions: None. This prohibition is absolute.

3. Social scoring — Article 5(1)(c)

What the law says: The placing on the market, putting into service, or use of AI systems for the evaluation or classification of natural persons or groups over a certain period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to either or both of:

  • Detrimental or unfavourable treatment of persons in social contexts unrelated to the contexts in which the data was originally generated or collected.
  • Detrimental or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.

What this means in practice: This is the "China-style social credit system" prohibition. It prevents public authorities and private actors alike from building AI systems that assign a general trustworthiness score to people based on their behaviour across multiple life domains, and then using that score to restrict their access to services or rights. Note that the original Commission proposal limited the ban to public authorities; the final text contains no such limitation, although public-sector deployments remain the paradigm case, as the examples below reflect.

Real-world examples of what's banned:

  • A municipal government implementing an AI system that tracks residents' tax payment history, social media activity, and waste recycling behaviour to generate a "civic score" determining priority access to public housing or municipal services.
  • A national authority deploying an AI system that aggregates citizens' interactions with government agencies — benefits claims, police records, health service usage — into a behavioural profile used to determine eligibility for services across unrelated domains.
  • A public transport authority using an AI system to rate passengers based on fare compliance and social media posts, then restricting travel passes for low-scoring individuals.

What is NOT banned: Individual, context-specific assessments remain lawful. A tax authority can use AI to assess tax fraud risk based on tax-relevant data. The prohibition targets generalised scoring across unrelated domains and disproportionate consequences.

Exceptions: None. Unlike the original Commission proposal, the final text is not limited to public authorities: private-sector social scoring with the described cross-domain or disproportionate effects is equally caught, and may additionally engage the manipulation and vulnerability prohibitions, GDPR, or consumer protection law.

4. Criminal risk profiling — Article 5(1)(d)

What the law says: The placing on the market, putting into service, or use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. This prohibition does not apply to AI systems used to support the human assessment of a person's involvement in criminal activity, which is already based on objective and verifiable facts directly linked to criminal activity.

What this means in practice: Predictive policing systems that flag individuals as likely future criminals based purely on profiling — without any objective facts linking them to actual criminal activity — are banned. The prohibition targets pre-crime prediction based on who someone is rather than what they have demonstrably done.

Real-world examples of what's banned:

  • A law enforcement AI system that analyses individuals' demographic data, neighbourhood, social connections, and behavioural patterns to generate a "criminal propensity score" without any connection to observed criminal conduct.
  • A border control AI that flags travellers as high-risk for criminal activity based solely on profiling attributes — nationality, travel patterns, family connections — without any objective intelligence linking them to specific offences.
  • A predictive policing platform that identifies "likely future offenders" based on personality assessments, psychological profiling, or social media behaviour without tying the assessment to verifiable criminal facts.

What is NOT banned: AI systems that assist human investigators in assessing criminal risk based on objective, verifiable facts — such as forensic evidence analysis, case-specific intelligence tools, or recidivism risk assessment grounded in a person's actual criminal record and case circumstances — are carved out. The distinction is between profiling-only prediction and fact-based assessment support.

Exceptions: AI systems supporting human assessment based on objective, verifiable facts directly linked to criminal activity are explicitly excluded from the prohibition.

5. Untargeted facial recognition scraping — Article 5(1)(e)

What the law says: The placing on the market, putting into service, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

What this means in practice: This prohibition directly targets the business model of companies like Clearview AI, which built facial recognition databases by scraping billions of photos from social media, news sites, and other public internet sources. The EU considers untargeted, mass-scale collection of biometric data for facial recognition to be fundamentally incompatible with privacy and data protection rights.

Real-world examples of what's banned:

  • A technology company that scrapes publicly available social media photos to build a facial recognition database offered to law enforcement or private-sector clients.
  • A security company that collects and stores facial images from CCTV cameras across multiple locations to build a general-purpose facial recognition dataset, even if the original CCTV use was lawful.
  • A research organisation that crawls the internet to compile a facial image dataset for training facial recognition models, regardless of whether the dataset is made commercially available.

What is NOT banned: Targeted, lawful collection of facial images for specific purposes — such as enrolling employees in an access control system with consent, or law enforcement collecting images of suspects pursuant to a warrant — is not covered. The key word is untargeted: the prohibition applies to mass, indiscriminate scraping, not purpose-specific collection.

Exceptions: None. The prohibition on untargeted scraping is absolute.

6. Emotion recognition in workplaces and education — Article 5(1)(f)

What the law says: The placing on the market, putting into service, or use of an AI system for emotion recognition of natural persons in the areas of workplace and education institutions, except where the AI system is intended to be put into service or placed on the market for medical or safety reasons.

What this means in practice: Employers and educational institutions cannot use AI systems that detect, classify, or infer emotional states of workers or students — whether through facial expressions, voice tone, body language, physiological signals, or other biometric/behavioural data.

Real-world examples of what's banned:

  • A call centre deploying AI that monitors agents' vocal tone and facial expressions to rate their "emotional engagement" and flag those deemed insufficiently empathetic.
  • A warehouse using AI cameras that track workers' facial expressions to identify "disengagement" and automatically alert supervisors.
  • A university using an AI proctoring system that monitors students' facial expressions during online exams to detect "suspicious" emotional states indicating cheating.
  • An office installing AI sensors that measure employees' stress levels through facial micro-expression analysis to "optimise workplace wellbeing."
  • A school deploying AI classroom monitoring that tracks students' attention via facial expression analysis.

What is NOT banned: Emotion recognition in other contexts (e.g., consumer research, entertainment, clinical therapy) falls outside this specific prohibition — though other AI Act and GDPR requirements may still apply. Importantly, the prohibition includes two explicit exceptions:

Exceptions:

  • Medical reasons: An AI system that detects emotional distress in a workplace or educational setting as part of a medical device or clinical tool (e.g., a system designed to detect signs of a medical emergency such as a panic attack or seizure) is exempt.
  • Safety reasons: An AI system designed to detect driver fatigue through emotional and attentiveness monitoring in transportation contexts, or a system that monitors operator alertness in safety-critical industrial environments, is exempt where safety is the primary purpose.

7. Biometric categorisation for sensitive attributes — Article 5(1)(g)

What the law says: The placing on the market, putting into service, or use of AI systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This prohibition does not apply to any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorisation of biometric data in the area of law enforcement.

What this means in practice: AI systems that analyse biometric characteristics — face, voice, gait — to classify people by race, religion, political beliefs, sexual orientation, or other sensitive categories are banned. This targets pseudoscientific AI applications claiming to infer deeply personal attributes from biological characteristics.

Real-world examples of what's banned:

  • An AI system that analyses facial features to classify individuals by ethnic origin or race for any purpose — security profiling, demographic analytics, targeted marketing.
  • A technology that claims to identify a person's sexual orientation from facial geometry or voice analysis (so-called "gaydar AI").
  • An AI system at a protest that analyses participants' biometric data to infer and catalogue their political opinions or trade union membership.
  • A hiring platform that uses voice or facial analysis AI to infer candidates' religious beliefs or philosophical convictions.

Exceptions: Labelling or filtering of lawfully acquired biometric datasets in law enforcement is excluded — for example, categorising facial images by hair colour or apparent age for search purposes — but only where the categorisation does not target the listed sensitive attributes.

8. Real-time remote biometric identification by law enforcement — Article 5(1)(h)

What the law says: The use of real-time remote biometric identification systems (RBI) in publicly accessible spaces for the purposes of law enforcement, except and insofar as such use is strictly necessary for one of the following objectives:

  • (i) The targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation, and the search for missing persons.
  • (ii) The prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or a genuine and present or foreseeable threat of a terrorist attack.
  • (iii) The localisation or identification of a person suspected of having committed a criminal offence listed in Annex II and punishable in the relevant Member State by a custodial sentence or detention order for a maximum period of at least four years.

What this means in practice: Live facial recognition and other real-time biometric identification in public spaces by law enforcement are subject to a general ban with narrow exceptions. The general rule is prohibition; the exceptions are limited, conditional, and subject to significant procedural safeguards.

Real-world examples of what's banned:

  • Police deploying live facial recognition cameras at a music festival to scan all attendees against a general watchlist without any specific, imminent threat or targeted search objective.
  • A city government installing permanent real-time facial recognition at public transport hubs to continuously monitor all commuters for general crime prevention.
  • Law enforcement running live biometric identification across public CCTV networks to identify anyone with an outstanding arrest warrant, without the procedural safeguards required by the Act.

What is permitted under the exceptions:

  • Police deploying real-time facial recognition at a specific location for a limited period to search for a child who has been abducted, where there is evidence the child may be in the area.
  • Security services activating real-time biometric identification at a major public event after receiving specific, credible intelligence of an imminent terrorist attack at that location.
  • Law enforcement using real-time facial recognition in a defined area to locate a suspect in a murder investigation (a serious offence under Annex II), subject to judicial authorisation.

Procedural safeguards required for exceptions: Even when an exception applies, the use must comply with strict conditions under Article 5(2)-(7), summarised below (and sketched in code after the list):

  • A prior authorisation from a judicial authority or an independent administrative authority is required (except in cases of duly justified urgency, where authorisation must be sought within 24 hours).
  • The use must be limited in time, geographic scope, and the number of persons to be identified.
  • A fundamental rights impact assessment must be completed before deployment.
  • The deploying authority must notify the relevant market surveillance authority and data protection authority.
  • The use must be necessary and proportionate — a general principle that applies in addition to the specific conditions.
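To make the cumulative nature of these conditions concrete, here is a minimal sketch that screens a planned deployment against them. The record fields and function names are illustrative assumptions, not terminology from the Act, and a real assessment is a legal exercise, not a boolean checklist.

```python
from dataclasses import dataclass

# The three objectives permitted by Article 5(1)(h)(i)-(iii), as labels.
PERMITTED_OBJECTIVES = {"victim_search", "imminent_threat", "annex_ii_suspect"}

@dataclass
class RbiDeploymentPlan:
    objective: str              # one of PERMITTED_OBJECTIVES
    prior_authorisation: bool   # judicial or independent administrative authority
    justified_urgency: bool     # urgent case; authorisation sought within 24 hours
    time_limited: bool
    geographically_limited: bool
    persons_limited: bool       # number of persons to be identified is bounded
    fria_completed: bool        # fundamental rights impact assessment done
    authorities_notified: bool  # market surveillance + data protection authority

def missing_safeguards(plan: RbiDeploymentPlan) -> list[str]:
    """Return the Article 5(2)-(7) conditions the plan fails (empty list = none)."""
    gaps = []
    if plan.objective not in PERMITTED_OBJECTIVES:
        gaps.append("no permitted Article 5(1)(h) objective")
    if not (plan.prior_authorisation or plan.justified_urgency):
        gaps.append("no prior authorisation and no justified urgency")
    if not (plan.time_limited and plan.geographically_limited and plan.persons_limited):
        gaps.append("use not limited in time, geography, and persons")
    if not plan.fria_completed:
        gaps.append("fundamental rights impact assessment missing")
    if not plan.authorities_notified:
        gaps.append("authorities not notified")
    return gaps
```

Every condition must hold simultaneously: a single gap in the returned list means the deployment falls back under the general prohibition.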

This is the most conditionally structured prohibition in Article 5, reflecting the political compromise reached during the Act's negotiation.

Comparison table — all 8 prohibited practices at a glance

| # | Practice | What's banned | Exceptions | Penalty |
|---|----------|---------------|------------|---------|
| 1 | Subliminal manipulation (Art. 5(1)(a)) | AI using subliminal, manipulative, or deceptive techniques that distort behaviour and cause significant harm | None | Up to EUR 35M / 7% turnover |
| 2 | Exploitation of vulnerabilities (Art. 5(1)(b)) | AI exploiting age, disability, or socioeconomic vulnerabilities to distort behaviour, causing significant harm | None | Up to EUR 35M / 7% turnover |
| 3 | Social scoring (Art. 5(1)(c)) | AI scoring people on social behaviour or personal characteristics, leading to unrelated or disproportionate detrimental treatment | None (applies to public and private actors) | Up to EUR 35M / 7% turnover |
| 4 | Criminal risk profiling (Art. 5(1)(d)) | AI assessing criminal risk based solely on profiling or personality traits, without objective, verifiable facts | AI supporting human assessment based on objective, verifiable facts linked to criminal activity | Up to EUR 35M / 7% turnover |
| 5 | Untargeted facial scraping (Art. 5(1)(e)) | Creating or expanding facial recognition databases via untargeted scraping of internet or CCTV images | None | Up to EUR 35M / 7% turnover |
| 6 | Workplace/education emotion recognition (Art. 5(1)(f)) | AI inferring emotions of workers or students | Medical reasons; safety reasons | Up to EUR 35M / 7% turnover |
| 7 | Biometric categorisation for sensitive attributes (Art. 5(1)(g)) | AI using biometrics to infer race, political opinions, religion, sexual orientation, etc. | Lawful labelling/filtering of biometric datasets in law enforcement | Up to EUR 35M / 7% turnover |
| 8 | Real-time biometric ID by law enforcement (Art. 5(1)(h)) | Live biometric identification in public spaces for law enforcement | Targeted victim searches; imminent threats; serious crime suspects, with judicial authorisation | Up to EUR 35M / 7% turnover |

How to check if your AI system is prohibited — practical self-assessment

If you are unsure whether an AI system you develop, deploy, or procure falls within Article 5, use this structured self-assessment. For a quick automated check, try our free AI Act risk classification tool.

Step 1: Identify the AI system's function

Describe precisely what the AI system does — its inputs, processing logic, and outputs. Focus on the technical and functional reality, not marketing language.

Step 2: Check against each prohibition

Work through each of the eight prohibited categories (a compact code version of this screen follows the list):

  1. Manipulation check: Does the system use techniques that operate below the threshold of user awareness, or that are purposefully deceptive or manipulative? Could these techniques distort behaviour and cause significant harm?
  2. Vulnerability check: Does the system specifically target or disproportionately affect people because of their age, disability, or socioeconomic situation in a way that distorts their behaviour and causes harm?
  3. Social scoring check: Does the system generate generalised social-behaviour scores, whether for a public authority or a private actor, with cross-domain or disproportionate detrimental consequences?
  4. Criminal profiling check: Does the system predict criminal risk based solely on profiling or personality characteristics, without being grounded in objective facts linked to actual criminal conduct?
  5. Facial scraping check: Does the system create or expand facial recognition databases using untargeted internet or CCTV scraping?
  6. Emotion recognition check: Does the system infer emotions in a workplace or educational context? If so, is it genuinely for medical or safety purposes?
  7. Biometric categorisation check: Does the system categorise individuals based on biometric data to infer race, political opinions, religion, sexual orientation, or other sensitive attributes?
  8. Real-time biometric ID check: Does the system perform live biometric identification in public spaces for law enforcement? If so, does it meet one of the three narrow exceptions with the required procedural safeguards?

Step 3: Document your assessment

Even if you conclude your system is not prohibited, document the analysis. This demonstrates diligence if a regulator questions your system, feeds into your broader AI governance framework and compliance checklist, and ensures the assessment is repeatable as the system evolves.

Step 4: If in doubt, escalate

If your assessment is inconclusive — particularly for systems near the boundaries of manipulation, vulnerability exploitation, or emotion recognition — seek legal counsel or engage with your national market surveillance authority. The cost of a legal opinion is trivial compared to a Tier 1 fine.

Interaction with other AI Act requirements

Article 5 does not exist in a vacuum. It interacts with several other parts of the AI Act framework:

High-risk AI systems (Articles 6-49)

An AI system that is not prohibited may still be classified as high-risk. In fact, many use cases sit close to the boundary. For example:

  • An emotion recognition system used in a clinical setting (not workplace/education) is not prohibited under Article 5(1)(f), but is classified as high-risk under Annex III, point 1(c), which covers emotion recognition systems.
  • A law enforcement AI tool that assesses criminal risk based on objective facts (not solely profiling) is exempt from Article 5(1)(d) but is classified as high-risk under Annex III, point 6 (law enforcement).
  • A biometric identification system used retrospectively ("post" remote biometric identification rather than real-time) by law enforcement is not caught by Article 5(1)(h) but is high-risk under Annex III.

The practical implication: passing the Article 5 assessment does not end your compliance journey. It is the first gate. Systems that clear Article 5 must still be assessed for high-risk classification and, if applicable, must meet the full suite of obligations under Articles 8-27.

General-purpose AI models (Articles 51-56)

General-purpose AI (GPAI) models — such as large language models — are regulated separately under Articles 51-56. However, a GPAI model could be integrated into a downstream system that constitutes a prohibited practice. The GPAI model provider has obligations regarding foreseeable misuse, and a deployer who builds a prohibited application on top of a GPAI model is liable under Article 5.

For example, a company that fine-tunes a large language model to create a social scoring system for a public authority would be deploying a prohibited practice — regardless of the fact that the underlying GPAI model is lawful.

AI bias and fairness

Several prohibited practices intersect with AI bias and fairness concerns. Biometric categorisation for sensitive attributes (Article 5(1)(g)) directly addresses the use of AI to make inferences about protected characteristics. Even for AI systems that are not prohibited, bias testing and fairness requirements under the high-risk framework (particularly Article 10 on data governance) serve a complementary function — preventing AI systems from producing discriminatory outcomes even where the system itself is not designed to categorise by sensitive attributes.

European Commission guidelines on prohibited practices (February 2025)

In February 2025, coinciding with the enforcement date of Article 5, the European Commission published non-binding guidelines on the interpretation of prohibited practices. These guidelines are significant because they provide the Commission's view on several ambiguous boundary questions:

Key clarifications in the guidelines

  • Significant harm threshold: The guidelines clarify that "significant harm" in Articles 5(1)(a) and 5(1)(b) is not limited to physical harm. It encompasses psychological harm, financial harm, and harm to fundamental rights. The assessment must consider the severity, breadth, and irreversibility of the harm.
  • Subliminal vs. merely persuasive: The guidelines distinguish between AI-driven persuasion (lawful in most cases) and subliminal manipulation (prohibited). The critical factor is whether the individual has the ability to perceive and resist the influence. Standard recommendation algorithms that users are aware of and can control generally do not qualify as subliminal. However, the guidelines warn that increasingly sophisticated personalisation could cross the line.
  • Social scoring scope: The guidelines confirm that the prohibition applies to both public and private actors, including private companies acting on behalf of public authorities. Legitimate, context-specific scoring based on data relevant to that context is not caught, though it may still constitute manipulation or vulnerability exploitation under Articles 5(1)(a)-(b), or violate GDPR provisions.
  • Emotion recognition boundary: The guidelines provide examples of what constitutes "emotion recognition" versus general biometric processing. Detecting whether a person is present (biometric detection), verifying their identity (biometric verification), or identifying them (biometric identification) is not emotion recognition. Emotion recognition specifically involves inferring emotional states — happiness, sadness, anger, fear, surprise, disgust, or more granular emotional classifications.
  • Medical and safety exceptions: The guidelines confirm that the medical/safety exceptions to the emotion recognition prohibition must be interpreted narrowly. A system that monitors driver drowsiness qualifies. A system that monitors worker "mood" under the guise of "workplace wellbeing" generally does not.

Practical impact

While the guidelines are non-binding, they carry significant interpretive weight. National market surveillance authorities and courts are likely to consider them when assessing potential violations. Organisations should review the guidelines alongside their Article 5 self-assessments and update their analysis accordingly.

Frequently asked questions

Does Article 5 apply to AI systems developed outside the EU?

Yes. The AI Act applies based on where the AI system's output is used, not where it was developed. A company based outside the EU that provides a prohibited AI system to EU customers — or whose system affects people within the EU — is subject to the prohibition and associated penalties. See our broader overview of the AI Act's scope.

Can a private company deploy social scoring?

Not in the way Article 5(1)(c) describes. The original Commission proposal limited the social scoring ban to public authorities, but the final text applies to private actors as well: a private company whose scoring system leads to detrimental treatment in unrelated contexts, or treatment disproportionate to the underlying behaviour, is caught directly. Context-specific private scoring (for example, credit risk assessment based on financially relevant data) falls outside the prohibition, though it may be classified as high-risk under Annex III and must still comply with GDPR Articles 9 and 22. Such a system could also be prohibited under Article 5(1)(a) (manipulation) or 5(1)(b) (exploitation of vulnerabilities).

Is all emotion recognition banned?

No. The prohibition is limited to emotion recognition in the workplace and education institutions. Emotion recognition in other contexts — healthcare, entertainment, consumer research, automotive safety — is not prohibited under Article 5(1)(f), although it may be classified as high-risk or subject to other requirements. Additionally, even within the workplace and education, there are exceptions for medical and safety purposes.

What happens to companies already using tools like Clearview AI in the EU?

Any use of a facial recognition database built through untargeted scraping of internet or CCTV images has been prohibited since 2 February 2025. Companies using such tools in the EU must discontinue their use immediately. Continuing to use them exposes the organisation to Tier 1 penalties. Several EU data protection authorities had already taken enforcement action against untargeted facial recognition scraping under GDPR (notably against Clearview AI); the AI Act now provides an additional — and more explicitly targeted — legal basis for prohibition.

How does the real-time biometric identification exception actually work in practice?

The exception requires prior judicial or independent administrative authorisation (except in urgent cases, where authorisation must be sought within 24 hours). The deploying law enforcement authority must demonstrate that the use is strictly necessary for one of the three permitted objectives (victim search, imminent threat prevention, or serious crime suspect identification). Each use must be limited in time, geographic scope, and the number of persons targeted. A fundamental rights impact assessment must be completed. The market surveillance authority and data protection authority must be notified. Member States may choose to impose additional restrictions or prohibit the use entirely under national law.

What should I do if my AI system is borderline?

Start with our free AI Act risk classification tool for an initial assessment. Then conduct a formal internal assessment following the self-assessment steps above and document your reasoning. If the assessment remains unclear, consult legal counsel specialising in EU AI regulation. Consider engaging proactively with your national market surveillance authority — early engagement is far preferable to enforcement action.

Key takeaways and next steps

Article 5 of the EU AI Act draws a clear line: certain AI practices are so harmful that no compliance framework can make them acceptable. These prohibitions have been live since February 2025, and the penalties for violation are the most severe in EU digital regulation.

For organisations operating AI systems in the EU market, the practical steps are:

  1. Audit your AI portfolio against all eight prohibited categories using a structured compliance checklist.
  2. Run the quick classification using our free AI Act risk assessment to identify any red flags.
  3. Document every assessment — even "not prohibited" conclusions — as part of your AI governance framework.
  4. Monitor Commission guidance and enforcement actions for interpretive developments.
  5. For systems near the boundary, engage legal counsel and consider proactive dialogue with your national authority.
  6. If a system is prohibited, discontinue it immediately. The financial and reputational cost of Tier 1 enforcement far exceeds any operational benefit.

The prohibited practices are only one layer of the AI Act. Systems that pass Article 5 must still be assessed for high-risk classification, transparency obligations, and provider or deployer responsibilities. But Article 5 is where it starts — the non-negotiable foundation of the EU's AI regulatory framework.

This guide is provided for informational purposes. It does not constitute legal advice. For organisation-specific compliance guidance, consult qualified legal counsel or contact the Legalithm team.

