Article 79: Procedure at National Level for AI Systems Presenting a Risk
Article 79 establishes the national enforcement procedure when a market surveillance authority finds that an AI system presents a risk to health, safety, fundamental rights, or the environment — even if the system formally complies with the Regulation (the "compliant but risky" scenario). The authority must require the provider to take corrective action within a prescribed period. If the provider fails to comply, the authority may restrict, prohibit, withdraw, or recall the system. The authority must notify the Commission and other Member States of its measures. This is the "Article 79 risk" referenced throughout the Act (e.g., in Article 12 logging requirements). Always verify on EUR-Lex.
Who does this apply to?
- Market surveillance authorities conducting national enforcement actions
- Providers required to take corrective action on AI systems presenting a risk
- Importers and distributors in the supply chain who may face withdrawal or recall orders
- Deployers operating systems that are subject to authority restrictions or prohibitions
Scenarios
A high-risk AI system used for social benefits eligibility assessment has passed conformity assessment and bears the CE marking. However, post-market monitoring reveals it systematically underestimates the needs of single-parent households, leading to fundamental rights concerns.
A market surveillance authority in Germany identifies that a high-risk AI system for critical infrastructure monitoring has a cybersecurity vulnerability that could be exploited, presenting a risk to safety even though the system met the Article 15 requirements at the time of placement.
The 'compliant but risky' safeguard (plain terms)
Article 79 addresses a critical enforcement gap: an AI system may technically comply with all AI Act requirements (conformity assessment passed, CE marking applied, documentation complete) but still present a real-world risk that was not anticipated or that emerged post-deployment. This "compliant but risky" scenario triggers Article 79:
- The market surveillance authority evaluates whether the system presents a risk to health, safety, fundamental rights, or the environment
- The risk assessment is outcome-based — it looks at what the system actually does in practice, not just whether the paperwork is in order
- This parallels the safeguard clause mechanism in EU product safety law (NLF), adapted for AI-specific risks including fundamental rights and environmental harm
- The authority acts unilaterally at the national level — the Union safeguard procedure (Article 81) follows if the measure is challenged or needs Union-wide application
Enforcement escalation ladder
Article 79 establishes a graduated enforcement process:
Step 1 — Evaluation: The authority conducts an overall assessment of the AI system, covering all relevant AI Act requirements.
Step 2 — Corrective action request: If the authority finds a risk, it requires the provider (and where applicable the importer, distributor, or deployer) to take appropriate corrective action to bring the system into compliance and eliminate the risk, within a prescribed and proportionate period.
Step 3 — Escalated measures: If the provider does not comply within the prescribed period, the authority may:
- Restrict making the system available on the market
- Prohibit the system's placement on the market or putting into service
- Withdraw the system from the market
- Recall the system from end users
Step 4 — Notification: The authority must inform the Commission, other Member States, and the relevant economic operators of the measures taken.
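The four steps above can be sketched as a small state model. This is purely an illustrative aid, not anything prescribed by the Regulation: all class, method, and measure names below are hypothetical, chosen to mirror the ladder's vocabulary.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Measure(Enum):
    """Step 3 measures available under Article 79 (illustrative labels)."""
    RESTRICT = auto()   # restrict making the system available on the market
    PROHIBIT = auto()   # prohibit placement on the market or putting into service
    WITHDRAW = auto()   # withdraw the system from the market
    RECALL = auto()     # recall the system from end users


@dataclass
class Article79Case:
    """Hypothetical model of the graduated Article 79 process."""
    risk_found: bool = False              # outcome of the Step 1 evaluation
    corrective_action_done: bool = False  # provider response within the period
    notified: list[str] = field(default_factory=list)

    def next_step(self) -> str:
        # Steps 1-2: evaluate first; if a risk is found, request corrective action
        if not self.risk_found:
            return "evaluate"
        if not self.corrective_action_done:
            return "request corrective action"
        return "close case"

    def escalate(self) -> list[Measure]:
        # Step 3: provider failed to comply within the prescribed period
        return [Measure.RESTRICT, Measure.PROHIBIT, Measure.WITHDRAW, Measure.RECALL]

    def notify(self) -> list[str]:
        # Step 4: inform the Commission, other Member States, economic operators
        self.notified = ["Commission", "other Member States", "economic operators"]
        return self.notified
```

The point of the sketch is the ordering: escalation and notification only become relevant once the evaluation finds a risk and the corrective-action period has run out.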
How Article 79 connects to the rest of the Act
- Article 12 — Logging requirements reference Article 79 risks as a reason for maintaining comprehensive logs.
- Article 73 — Serious incident reports may trigger Article 79 evaluation.
- Article 74 — Market surveillance powers used during the Article 79 evaluation.
- Article 80 — Procedure for AI systems the provider classified as non-high-risk under Annex III (complementary to Article 79's risk-based procedure).
- Article 81 — Union safeguard procedure that may follow an Article 79 national measure.
- Article 16 — Provider obligations that the authority verifies during evaluation.
- Article 99 — Penalties for non-compliance with corrective action orders.
- Article 113 — Application dates and staged entry into force.
Recitals (preamble) on EUR-Lex
The recitals in the consolidated AI Act on EUR-Lex explain that the Article 79 safeguard mechanism is necessary because conformity assessment is a point-in-time check, while AI risks may evolve post-deployment due to data drift, adversarial inputs, or deployment in unanticipated contexts. The procedure is modelled on analogous provisions in the New Legislative Framework (Decision 768/2008/EC) but expanded to cover fundamental rights and environmental risks specific to AI. Consult the official preamble on EUR-Lex.
Compliance checklist
- Maintain post-market monitoring (Article 72) to detect emerging risks before authorities do — proactive risk management reduces Article 79 exposure.
- Build internal procedures for responding to authority corrective action requests within prescribed timelines.
- Designate a single point of contact for market surveillance authority communications in each Member State.
- Prepare corrective action capabilities: system updates, bias corrections, performance patches — on a rapid timeline.
- If you receive an Article 79 corrective action request: document all actions taken and evidence of risk elimination.
- For importers and distributors: understand your supply chain liability — an Article 79 withdrawal or recall order may extend to you.
- Monitor notifications from other Member States — an Article 79 measure in one country signals risk exposure across the Union.
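Several checklist items above come down to one operational discipline: track each corrective action request against its prescribed deadline and keep evidence of what was done. A minimal internal record might look like the following — the field names and format are assumptions for illustration, not anything the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class CorrectiveActionRequest:
    """Hypothetical internal record for an Article 79 corrective action request."""
    authority: str                 # issuing market surveillance authority
    received: date                 # date the request was received
    prescribed_days: int           # period set case by case by the authority
    actions_taken: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)

    @property
    def deadline(self) -> date:
        return self.received + timedelta(days=self.prescribed_days)

    def is_overdue(self, today: date) -> bool:
        # Missing the prescribed period exposes the system to Step 3 measures
        return today > self.deadline and not self.actions_taken

    def log_action(self, action: str, evidence_ref: str) -> None:
        # Document every action taken and the evidence of risk elimination
        self.actions_taken.append(action)
        self.evidence.append(evidence_ref)
```

Pairing each logged action with an evidence reference matters because, if the measure is later escalated or notified Union-wide, the provider's record of risk elimination is what gets scrutinized.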
Related Articles
Article 12: Record-keeping
Article 16: Obligations of Providers of High-Risk AI Systems
Article 73: Reporting of Serious Incidents
Article 74: Market Surveillance and Control of AI Systems in the Union Market
Article 80: Procedure for Dealing with AI Systems Classified by the Provider as Non-High-Risk in Application of Annex III
Article 81: Union Safeguard Procedure
Article 99: Penalties for AI Act Infringements
Article 113: Entry into Force and Application Dates
Frequently asked questions
Can an authority act under Article 79 if my system passed conformity assessment?
Yes — this is the core purpose of Article 79. Conformity assessment is a point-in-time check. If the system later presents a risk (due to data drift, adversarial inputs, deployment context, or emerging evidence), Article 79 empowers the authority to require corrective action regardless of prior compliance status.
How much time do I get to take corrective action?
The authority sets a prescribed period that must be proportionate to the nature of the risk. For imminent risks to health or safety, the period may be very short. The exact timeline is determined case by case — the Regulation does not prescribe a fixed duration.
What is the difference between Article 79 and Article 80?
Article 79 addresses systems that present a risk (even if formally compliant) — the 'compliant but risky' scenario. Article 80 addresses systems that the provider classified as non-high-risk in application of Annex III but that the market surveillance authority considers to be high-risk; formal non-compliance (missing CE marking, inadequate documentation) is handled under a separate provision (Article 83). All of these procedures can lead to corrective measures, but the legal basis and trigger differ.