Article 5: Prohibited AI Practices
Article 5 of the EU AI Act prohibits eight categories of AI practices that pose an unacceptable risk to people and fundamental rights. The prohibitions in Article 5(1) have been enforceable since 2 February 2025 (Article 113). If your use case matches any limb, you must not place that AI system on the EU market or put it into service. The complete official text is reproduced below.
Who does this apply to?
- Providers placing prohibited AI systems on the Union market or putting them into service
- Deployers using AI systems under their authority where the use pattern falls within Article 5(1)(a)–(h)
- Product, policy, and procurement teams screening roadmaps before build-or-buy decisions
- Law-enforcement and security teams evaluating biometrics, emotion inference, or public-space analytics in the Union
Scenarios
A city government scores residents on social media, payment history, and civic complaints to rank “trustworthiness” for access to public housing.
A retailer deploys cameras and emotion AI in the warehouse to infer workers’ moods during shifts.
A startup scrapes public profile photos from the web to grow a face-ID database without targeting specific persons.
A bank uses an AI model that predicts future offending for individuals based only on demographics and past minor offences, without a concrete investigation.
What Article 5 does (in plain terms)
Article 5 is the Act’s red line chapter: certain AI practices are banned outright in the Union. They are not “high-risk” systems you can paper over with conformity documentation—if you fall in, you must stop the practice or redesign so it is outside the prohibition.
Article 5(1) lists eight limbs (a)–(h). In headline form:
1. Subliminal or purposefully manipulative / deceptive techniques that materially distort behaviour to cause significant harm ((a)).
2. Exploiting vulnerabilities of a person or group (age, disability, social or economic situation) to distort behaviour and cause significant harm ((b)).
3. Public-authority social scoring over time from social behaviour or inferred characteristics, with detrimental treatment in unrelated or disproportionate ways ((c)).
4. Individual predictive criminal-risk assessments based solely on profiling or personality traits—not the same as human-led investigation support tied to objective, verifiable facts ((d)).
5. Untargeted scraping of facial images from the internet or CCTV to create or expand face-ID databases ((e)).
6. Inferring emotions in workplaces and education institutions, except narrow medical or safety carve-outs ((f)).
7. Biometric categorisation to infer sensitive attributes (race, beliefs, union membership, sexual life/orientation, etc.), subject to express carve-outs for lawful dataset labelling ((g)).
8. Real-time remote biometric identification in publicly accessible spaces for law enforcement, only within strict necessity tests, safeguards, prior authorisation, FRIA, and database registration as set out in Article 5(2)–(8) ((h)).
Read this page alongside your lawyer: the labels above are navigational shorthand, not substitutes for the authentic (a)–(h) definitions.
How Article 5 connects to the rest of the Act
- Article 1 — Subject matter: Chapter II prohibitions are one of the harmonised building blocks in Article 1(2)(b).
- Article 2 — Scope: prohibitions apply to providers and deployers caught by territorial and market hooks (including third-country actors linked to Union use).
- Article 3 — Definitions: terms such as provider, deployer, biometric data, and remote biometric identification matter for classifying a setup.
- Article 4 — AI literacy: teams must understand that Article 5 breaches are not fixable by “more documentation.”
- Article 6 + Annex III — High-risk tracks are only available where the practice is not prohibited.
- Article 50 — Transparency systems are a different risk tier; do not confuse with Chapter II bans.
- Article 27 + Article 49 — FRIA and registration for certain (h) deployments.
- Article 99 — Penalties for infringements (maximum tiers referenced on this page).
- Article 113 — Application dates (Article 5 prohibitions in force from 2 February 2025).
Cross-law: Regulation (EU) 2016/679 (GDPR) remains relevant for biometric processing outside the law-enforcement carve-outs; see the "without prejudice" sentence after point (h) in Article 5 on EUR-Lex.
Red-flag scenarios (non-exhaustive)
- Public-sector trust scores from unrelated behavioural signals → map to (c).
- Warehouse or office emotion analytics on staff → map to (f) unless a documented medical/safety pathway fits the legal carve-out.
- Bulk face harvesting from the open web or CCTV to grow an ID gallery → (e).
- Future crime risk scores from demographics/personality alone → (d).
- Dark-pattern or below-threshold persuasion campaigns designed to distort autonomous choice → (a).
- Inferring sensitive traits from biometrics for non-exempt purposes → (g).
- Live face-in-the-crowd policing without national frameworks, authorisation, FRIA, and registration → (h) plus Article 5(2)–(5).
Exceptions and safeguards (especially (h))
The headline exceptions for real-time remote biometric identification in publicly accessible spaces for law enforcement (Article 5(1)(h)) are summarised in many secondary tables, but only the authentic wording governs: the objectives in (h)(i)–(iii), the necessity and proportionality tests in Article 5(2), prior authorisation under Article 5(3) (with 24-hour urgency rules), notification under Article 5(4), Member State enabling law under Article 5(5), and reporting under Article 5(6)–(7).
Operational links in the Legalithm guide: Article 27 (fundamental rights impact assessment) and Article 49 (EU database registration) are explicitly referenced in Article 5(2) for certain deployments.
Annex II lists criminal offences that matter for (h)(iii)—see Annex II on EUR-Lex and the cross-link in this article’s related annexes.
Penalties and enforcement context
Serious infringements of Article 5 attract the Act's highest administrative fines under Article 99 (EUR-Lex). The figures on this page are ceilings; authorities apply the Article 99(3) criteria when setting actual amounts.
- Undertakings: up to €35 million or 7% of total worldwide annual turnover in the preceding financial year, whichever is higher.
- SMEs and start-ups: lower maximum amounts in the same Article 99 table.
Treat monetary bands as risk to the business case, not as a licence to pay-to-play—prohibited practices must be engineered out, not budgeted for.
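The "whichever is higher" structure of the undertaking ceiling can be illustrated with a short sketch. This is not a legal calculator; the function name and inputs are illustrative, and actual fines depend on the Article 99(3) criteria.

```python
def article5_fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for Article 5 infringements by undertakings
    under Article 99: up to EUR 35 million or 7 % of total worldwide
    annual turnover in the preceding financial year, whichever is higher."""
    fixed_cap = 35_000_000.0
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return max(fixed_cap, turnover_cap)

# For a EUR 2 billion turnover, the 7 % limb (EUR 140 million) exceeds
# the fixed EUR 35 million limb, so it sets the ceiling.
print(article5_fine_ceiling_eur(2_000_000_000))  # 140000000.0
```

For smaller undertakings the fixed EUR 35 million limb dominates, which is why the ceiling never falls below that figure regardless of turnover.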
Official wording: Article 5 — Prohibited AI practices
The following reproduces Article 5 in full from the English consolidated text of Regulation (EU) 2024/1689.
1. The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
(c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;
(d) the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;
(e) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
(g) the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement;
(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;

(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.

Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of biometric data for purposes other than law enforcement.

2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements:

(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used;

(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay.

3. For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted.

The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system.

4. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data.

5. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and under the conditions listed in paragraph 1, first subparagraph, point (h), and paragraphs 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, first subparagraph, point (h), including which of the criminal offences referred to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems.

6. National market surveillance authorities and the national data protection authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 4 shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance and data protection authorities with a template, including information on the number of the decisions taken by competent judicial authorities or an independent administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and their result.

7. The Commission shall publish annual reports on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, based on aggregated data in Member States on the basis of the annual reports referred to in paragraph 6. Those annual reports shall not include sensitive operational data of the related law enforcement activities.

8. This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex explain the policy choices behind unacceptable-risk prohibitions (manipulation, vulnerability exploitation, social scoring, predictive policing, scraping, emotion inference, sensitive biometric inference, and live biometric surveillance). Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Screen every AI use case against the eight Article 5(1) prohibitions before design or procurement.
- Document why the system is outside prohibited practices (especially for biometrics, emotion inference, and public-sector scoring).
- For any law-enforcement biometric identification in public spaces, verify national law, judicial authorisation, and FRIA/registration obligations under the rest of the Act.
- Train product, legal, and security teams on the difference between prohibited practices, high-risk systems, and transparency-only obligations.
- Re-review when the model, data sources, or deployment context change (new geography, new user group, or new modality).
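The screening step in the checklist can be run as a simple rule table during intake. The flags and limb labels below are hypothetical simplifications of the (a)–(h) headlines; a hit means "escalate to counsel", never a legal determination.

```python
# Hypothetical triage flags mapped to Article 5(1) limbs. The flag names
# are illustrative shorthand, not legal definitions; any hit should be
# escalated for legal review against the authentic text.
ARTICLE_5_TRIAGE = {
    "subliminal_or_manipulative_distortion": "(a)",
    "exploits_vulnerable_group": "(b)",
    "public_authority_social_scoring": "(c)",
    "crime_prediction_from_profiling_alone": "(d)",
    "untargeted_face_scraping": "(e)",
    "emotion_inference_workplace_or_education": "(f)",
    "biometric_categorisation_sensitive_traits": "(g)",
    "realtime_remote_biometric_id_law_enforcement": "(h)",
}

def screen_use_case(flags: set[str]) -> list[str]:
    """Return the Article 5(1) limbs a use case may touch, for escalation."""
    return sorted(ARTICLE_5_TRIAGE[f] for f in flags if f in ARTICLE_5_TRIAGE)

hits = screen_use_case({"untargeted_face_scraping",
                        "emotion_inference_workplace_or_education"})
print(hits)  # ['(e)', '(f)']
```

A non-empty result triggers the "document why the system is outside the prohibition or redesign" step above; an empty result still leaves high-risk and transparency classification to be assessed.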
Not sure if your AI system involves prohibited practices? Use our free assessment tool.
Start Free Assessment
Related Articles
Article 1: Subject matter
Article 2: Scope
Article 3: Definitions
Article 4: AI literacy
Article 6: Classification Rules for High-Risk Systems
Annex III: High-Risk AI System Areas
Article 27: Fundamental Rights Impact Assessment
Article 49: EU Database Registration
Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems
Article 99: Penalties for AI Act Infringements
Article 113: Entry into Force and Application Dates
Related annexes
- Annex II — Criminal offences referenced in Article 5(1)(h)(iii)
Frequently asked questions
Does Article 5 apply to my company if we are not in the EU?
Yes, if you place an AI system on the EU market, put it into service in the Union, or if the output is intended to be used in the Union. Third-country providers are in scope the same way as EU-based ones for these prohibitions.
Is emotion AI always banned?
No. Article 5(1)(f) targets inferring emotions in workplaces and educational institutions. Other contexts may still be regulated differently (for example as high-risk or under transparency rules), and medical or safety uses can be carved out where they meet the conditions in the legal text.
What is the difference between Article 5 and “high-risk” under Annex III?
Article 5 lists practices that must not be done at all. High-risk systems under Article 6 and Annex III may be placed on the market if they meet the Chapter III requirements. If you are unsure which bucket applies, start with Article 5, then assess classification.
Can we use facial recognition at all?
Many biometric use cases are not blanket-banned—for example verification to confirm identity for access is excluded from the remote biometric identification definition. Untargeted scraping to build databases and most real-time remote biometric identification in public spaces for law enforcement are the focus of the strictest rules.