Article 76: Supervision of Testing in Real-World Conditions by Market Surveillance Authorities
Article 76 grants market surveillance authorities the power to supervise and oversee the testing of high-risk AI systems in real-world conditions. Authorities may require providers or prospective providers to supply information about the testing plan, may observe the testing, and may order the suspension or termination of testing if it poses a risk to health, safety, or fundamental rights. This provision complements the AI regulatory sandbox framework (Articles 57–60) and ensures that real-world testing under Article 60 does not become an enforcement gap. Always verify on EUR-Lex.
Who does this apply to?
- Market surveillance authorities supervising real-world AI testing in their territory
- Providers of high-risk AI systems conducting testing in real-world conditions under Article 60
- Prospective providers testing AI systems in regulatory sandboxes (Articles 57–60)
- Test subjects and affected persons whose rights are protected during real-world testing
Scenarios
A provider of a high-risk biometric verification system plans a real-world test at airport border control involving live passengers. The provider registers the test plan with the national market surveillance authority.
A prospective provider participates in a regulatory sandbox (Article 57) and transitions to real-world testing of a predictive maintenance AI for critical infrastructure. The testing plan is shared with the competent authority.
Authority powers over real-world testing (plain terms)
Article 76 equips market surveillance authorities with three core powers over real-world testing:
1. Information access: Authorities may require providers or prospective providers to submit the testing plan, including scope, duration, test subjects, safeguards, and expected outcomes
2. Observation: Authorities may observe the testing — either on-site or through regular reporting — to verify that conditions match the plan and that risks are managed
3. Suspension or termination: If testing presents a risk to health, safety, or fundamental rights, the authority may order the provider to suspend or terminate the test immediately
These powers apply whether the real-world testing occurs under the general Article 60 framework or within a regulatory sandbox (Articles 57–60).
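To make the information-access power concrete, the kind of testing-plan record an authority might request can be sketched as a simple data structure. This is an illustrative assumption only: the Act does not prescribe a schema, and every field name below (`scope`, `duration_days`, and so on) is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class RealWorldTestingPlan:
    """Hypothetical record of a real-world testing plan an authority may request."""
    scope: str                 # what the system does and where it is tested
    duration_days: int         # real-world testing should be time-limited
    test_subjects: list[str]   # categories of persons involved in the test
    safeguards: list[str]      # e.g. informed consent, opt-out, data protection
    expected_outcomes: str     # what the test is meant to demonstrate

    def missing_fields(self) -> list[str]:
        """Pre-submission check: return the names of fields left empty."""
        missing = []
        if not self.scope:
            missing.append("scope")
        if self.duration_days <= 0:
            missing.append("duration_days")
        if not self.test_subjects:
            missing.append("test_subjects")
        if not self.safeguards:
            missing.append("safeguards")
        if not self.expected_outcomes:
            missing.append("expected_outcomes")
        return missing
```

A provider could run `missing_fields()` before submitting the plan, so an information request under Article 76 never arrives against an incomplete record.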
Relationship with regulatory sandboxes
Real-world testing often takes place within or after an AI regulatory sandbox (Articles 57–60). Article 76 ensures that the supervisory authority retains full oversight even where sandbox conditions provide a controlled environment:
- Inside the sandbox: The sandbox authority and the market surveillance authority may be the same body or may coordinate
- Exiting the sandbox: When testing moves from sandbox to broader real-world conditions, Article 76 supervision intensifies
- Informed consent and safeguards for test subjects are enforced through Article 76 observation powers
- The article prevents sandbox participation from being used to circumvent normal market surveillance
How Article 76 connects to the rest of the Act
- Articles 57–60 — Regulatory sandboxes and real-world testing conditions that Article 76 supervises.
- Article 74 — General market surveillance powers that underpin Article 76.
- Article 75 — Mutual assistance if real-world testing spans multiple Member States.
- Article 77 — Fundamental rights authorities may be alerted if testing raises rights concerns.
- Article 99 — Penalties for conducting testing in breach of conditions.
- Article 113 — Application dates and staged entry into force.
Recitals (preamble) on EUR-Lex
The recitals in the consolidated AI Act on EUR-Lex emphasise that real-world testing should be proportionate and time-limited, with appropriate safeguards for affected persons. The legislative intent is to allow innovation while ensuring that market surveillance authorities retain the ability to intervene swiftly. Consult the official preamble on EUR-Lex.
Compliance checklist
- Register your real-world testing plan with the competent market surveillance authority before commencing tests.
- Include in the plan: scope, duration, number and categories of test subjects, safeguards, data protection measures, and exit criteria.
- Implement informed consent mechanisms for test subjects where applicable.
- Prepare for authority observation — designate an internal liaison for on-site inspections.
- Build a rapid response procedure: if the authority orders suspension, you must be able to halt testing immediately.
- Document all testing results, incidents, and deviations from the plan for authority review.
- If testing spans multiple Member States, coordinate with all relevant authorities (Article 75 mutual assistance).
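The rapid response item in the checklist can be sketched as a kill-switch: one shared flag that every testing component checks before acting, so an authority-ordered suspension halts the test immediately. This is a minimal sketch under assumed names (`TestingKillSwitch`, `order_suspension`, `check`), not a prescribed mechanism.

```python
import threading


class TestingKillSwitch:
    """Hypothetical kill-switch so an authority-ordered suspension halts testing at once."""

    def __init__(self) -> None:
        self._suspended = threading.Event()  # thread-safe shared flag
        self.reason: str | None = None

    def order_suspension(self, reason: str) -> None:
        # Called by the internal liaison when the authority orders
        # suspension or termination of the test.
        self.reason = reason
        self._suspended.set()

    def check(self) -> None:
        # Every testing component calls this before each action;
        # once the flag is set, no further test activity proceeds.
        if self._suspended.is_set():
            raise RuntimeError(f"Testing suspended: {self.reason}")


switch = TestingKillSwitch()
switch.check()  # no suspension ordered yet, so testing proceeds
switch.order_suspension("authority order under Article 76")
```

After `order_suspension` is called, any subsequent `check()` raises, which is one way to guarantee the "halt testing immediately" requirement across concurrent components.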
Related Articles
Article 57: AI Regulatory Sandboxes
Article 60: Testing of High-Risk AI Systems in Real-World Conditions Outside AI Regulatory Sandboxes
Article 74: Market Surveillance and Control of AI Systems in the Union Market
Article 75: Mutual Assistance, Market Surveillance and Control of General-Purpose AI Systems
Article 77: Powers of Authorities Protecting Fundamental Rights
Article 99: Penalties for AI Act Infringements
Article 113: Entry into Force and Application Dates
Frequently asked questions
Does Article 76 apply only to high-risk AI systems?
Article 76 primarily targets the supervision of high-risk AI testing in real-world conditions (Article 60). However, regulatory sandbox provisions (Articles 57–60) may also cover non-high-risk systems under supervised conditions — consult EUR-Lex for the exact scope.
Can the authority stop my test without warning?
If the authority has reason to believe the testing presents an immediate risk to health, safety, or fundamental rights, it may order suspension or termination. In practice, authorities will typically request information first, but they retain the power to act urgently.
What happens if I test across multiple countries?
Each Member State's market surveillance authority retains jurisdiction over testing conducted in its territory. Article 75 mutual assistance coordinates information sharing between authorities, so a problem found in one country may trigger scrutiny in others.