Article 14: Human oversight
Article 14 requires high-risk AI systems to be designed and developed, including with appropriate human-machine interface tools, so that natural persons can effectively oversee them while they are in use. Human oversight must aim to prevent or minimise risks to health, safety, and fundamental rights arising from intended use or reasonably foreseeable misuse, in particular where such risks persist despite the other Section 2 requirements. Oversight measures must be commensurate with the risks, level of autonomy, and context of use, implemented through provider-built measures and/or deployer measures identified by the provider. Paragraph 4 lists the capabilities that persons assigned to oversight must have: understanding the system's capacities and limitations, monitoring its operation for anomalies, remaining aware of automation bias, correctly interpreting outputs, the ability to disregard, override, or reverse outputs, and the ability to stop the system safely. Paragraph 5 adds two-person verification for biometric identification systems under Annex III, point 1(a), with an exception where Union or national law considers that requirement disproportionate for law enforcement, migration, border control, or asylum purposes.
Who does this apply to?
- Providers designing oversight tooling, interfaces, and procedures before placing high-risk systems on the market
- Deployers assigning competent humans and implementing provider-identified measures under Article 26
Scenarios
- Insufficient oversight: loan officers see only a green/red score with no rationale, override, or escalation path.
- Effective oversight: operators can pause inference, inspect feature attributions, route cases to manual review, and stop the system safely with audit trails (see the sketch below).
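To make the second scenario concrete, here is a minimal Python sketch of a confidence-based triage step that always surfaces low-confidence outputs to a human reviewer. The function name and the 0.80 threshold are illustrative assumptions, not terms drawn from the Act.

```python
def route_output(output: dict, confidence: float, threshold: float = 0.80) -> str:
    """Illustrative triage: low-confidence outputs always go to a human.

    The 0.80 threshold is an assumption for this sketch; in practice it
    should be derived from the provider's risk analysis under Article 9.
    """
    if confidence < threshold:
        return "manual_review"  # surfaced to a human with full rationale
    return "automated_with_oversight"  # still logged, still overridable
```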
Designing for effective oversight (plain terms)
Paragraphs 1–3 set the goal and proportionality rule: oversight must match risk, autonomy, and context, using provider-built measures where technically feasible and/or deployer measures the provider defines before placement.
Paragraph 4 translates that into concrete human capabilities: understanding the system's capacities and limitations, monitoring for anomalies and unexpected performance, remaining aware of automation bias, correctly interpreting outputs, deciding to disregard, override, or reverse an output, and intervening or interrupting the system safely (e.g. via a 'stop' control).
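As an illustration only, the Python sketch below shows how a provider might expose the paragraph 4(d) and 4(e) controls in an inference wrapper: a reviewer can accept, override, or disregard an output, and a stop control halts the system in a safe state. All names (`OversightWrapper`, `Decision`, `stop`) are assumptions for this sketch; the Act prescribes the capabilities, not any particular design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable, Optional


class Decision(Enum):
    ACCEPT = "accept"        # let the model output stand
    OVERRIDE = "override"    # substitute the reviewer's own result
    DISREGARD = "disregard"  # reject the output entirely


@dataclass
class OversightWrapper:
    """Illustrative wrapper exposing Article 14(4)(d)/(e)-style controls."""
    model: Callable[[dict], dict]
    stopped: bool = False
    audit_log: list = field(default_factory=list)

    def infer(self, inputs: dict) -> dict:
        if self.stopped:
            raise RuntimeError("System halted; restart requires authorisation.")
        output = self.model(inputs)
        self._log("inference", output)
        return output

    def review(self, output: dict, decision: Decision,
               replacement: Optional[dict] = None) -> Optional[dict]:
        """Reviewer may accept, override, or disregard an output (14(4)(d))."""
        self._log(decision.value, output)
        if decision is Decision.OVERRIDE:
            return replacement
        if decision is Decision.DISREGARD:
            return None
        return output

    def stop(self, reason: str) -> None:
        """'Stop' control (14(4)(e)): halt the system in a safe state."""
        self.stopped = True
        self._log("stop", {"reason": reason})

    def _log(self, event: str, payload: dict) -> None:
        self.audit_log.append({"event": event, "payload": payload,
                               "at": datetime.now(timezone.utc).isoformat()})
```

An audit trail like `audit_log` also helps evidence the record-keeping expectations discussed under Article 12.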
Paragraph 5 is a stricter layer for remote biometric identification under Annex III point 1(a): no action on an identification without separate two-person verification, except where Union or national law considers that requirement disproportionate for the purposes of law enforcement, migration, border control or asylum.
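One way to operationalise that rule in the application layer is a gate that refuses any action until two distinct, authorised verifiers have confirmed the identification. The sketch below uses assumed names (`TwoPersonGate`, `may_act`); the Act prescribes the outcome, not this design.

```python
from dataclasses import dataclass, field


@dataclass
class TwoPersonGate:
    """Illustrative Article 14(5)-style gate: block any action on an
    identification until two distinct verifiers confirm it separately."""
    required_confirmations: int = 2
    confirmations: dict = field(default_factory=dict)  # match_id -> verifier ids

    def confirm(self, match_id: str, verifier_id: str) -> None:
        self.confirmations.setdefault(match_id, set()).add(verifier_id)

    def may_act(self, match_id: str) -> bool:
        # A set enforces distinct natural persons: double-signing is inert.
        return len(self.confirmations.get(match_id, set())) >= self.required_confirmations


gate = TwoPersonGate()
gate.confirm("match-001", "officer-a")
assert not gate.may_act("match-001")    # one confirmation is not enough
gate.confirm("match-001", "officer-a")  # same person again: still blocked
assert not gate.may_act("match-001")
gate.confirm("match-001", "officer-b")
assert gate.may_act("match-001")        # two distinct verifiers: action allowed
```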
How Article 14 connects to the rest of the Act
- Article 13 — instructions for use must document Article 14 measures for deployers (Article 13(3)(d)).
- Article 12(3)(d) — Logging of verifiers links to Article 14(5) for Annex III point 1(a).
- Article 26 — Deployers must assign competent humans and implement oversight measures.
- Article 9 — Risk analysis should drive which oversight measures are commensurate.
- Article 27 — the fundamental rights impact assessment (FRIA) may intersect with oversight in public-sector deployments.
- Article 113 — Application dates.
Official wording: Article 14 — Human oversight (English)
The following reproduces the complete text of Article 14 from the English consolidated text of Regulation (EU) 2024/1689 (OJ L 2024/1689).
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.
3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:
(a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
(b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.
4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:
(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
(b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
(c) to correctly interpret the high-risk AI system's output, taking into account, for example, the interpretation tools and methods available;
(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;
(e) to intervene in the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure that allows the system to come to a halt in a safe state.
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.
The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex contextualise human oversight, automation bias, and biometric identification safeguards. Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Run human-factors review on real workflows (not demos).
- Log overrides, stops, and time-to-intervention metrics (a sketch follows this checklist).
- Train staff on automation bias and escalation paths.
- Validate that low-confidence outputs always surface to a human reviewer.
- Re-test after UI redesigns that could hide risk signals.
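To make the logging items above concrete, here is a minimal sketch, with assumed names and structure, that records when a risk signal surfaced and when a human intervened, so time-to-intervention can be computed and audited.

```python
from datetime import datetime, timezone
from typing import Optional


class InterventionMetrics:
    """Illustrative helper: record risk signals and human interventions
    per case so time-to-intervention can be measured and audited."""

    def __init__(self) -> None:
        self.events: list = []

    def record(self, case_id: str, kind: str, detail: str = "") -> None:
        # kind is "signal" (risk surfaced) or "intervention" (human acted)
        self.events.append({"case": case_id, "kind": kind, "detail": detail,
                            "at": datetime.now(timezone.utc)})

    def time_to_intervention(self, case_id: str) -> Optional[float]:
        """Seconds between the first signal and the first intervention."""
        times = {"signal": [], "intervention": []}
        for e in self.events:
            if e["case"] == case_id and e["kind"] in times:
                times[e["kind"]].append(e["at"])
        if not times["signal"] or not times["intervention"]:
            return None
        return (min(times["intervention"]) - min(times["signal"])).total_seconds()
```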
Related Articles
- Article 4: AI literacy
- Article 9: Risk Management System
- Article 11: Technical Documentation
- Article 12: Record-keeping
- Article 13: Transparency and provision of information to deployers
- Article 26: Obligations of Deployers of High-Risk AI Systems
- Article 27: Fundamental Rights Impact Assessment
- Annex III: High-Risk AI System Areas
- Article 113: Entry into Force and Application Dates
Related annexes
- Annex IV — Technical documentation
Frequently asked questions
Does every decision need a human in the loop?
Not always, but humans must be able to intervene where risks warrant it. Fully autonomous operation requires stronger evidence that residual risks remain acceptable.
Who is responsible if the UI hides the stop button?
Providers must design effective oversight tools; deployers must implement assigned measures. Gaps in either layer can create liability and safety exposure—coordinate contractually and operationally.
Does Article 14 require a human to approve every AI decision?
Not necessarily. Article 14 requires that high-risk AI systems be designed to allow effective oversight by natural persons, but the level of oversight depends on the risk and context. For some systems a 'human-in-the-loop' is needed; for others, 'human-on-the-loop' (monitoring with the ability to intervene) suffices.