Article 9: Risk Management System
Article 9 requires providers of high-risk AI systems to establish, document, implement, and maintain a risk management system across the entire lifecycle. It must identify known and foreseeable risks, estimate post-mitigation residual risk, and be updated when substantial modifications occur.
Who does this apply to?
- Providers of high-risk AI systems under Article 6
Scenarios
A provider ships a medical imaging AI without documented hazard analysis or residual-risk acceptance criteria.
After deployment, new edge cases emerge that increase false negatives; the provider updates the risk file and re-tests.
What the risk management system must cover
The system must be a continuous, iterative process run throughout the entire lifecycle of the high-risk AI system. It should:
- Identify and analyse known and foreseeable risks to health, safety, and fundamental rights
- Evaluate risks that may emerge when the system is used according to instructions and under conditions of reasonably foreseeable misuse
- Adopt suitable risk management measures
- Ensure that residual risk, per hazard and overall, is judged acceptable after mitigation; where risks cannot be eliminated, implement adequate mitigation and control measures and provide the information (and, where appropriate, training) required under Article 13
Testing must be carried out against prior defined metrics and probabilistic thresholds appropriate to the intended purpose of the high-risk AI system, and may include testing in real-world conditions in accordance with Article 60.
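In practice, "prior defined metrics and probabilistic thresholds" means the acceptance criteria are fixed before testing and the release decision is checked against them. A minimal sketch of such a pre-release gate is below; the metric names and threshold values are hypothetical examples, not figures from the Act.

```python
# Illustrative sketch only: a pre-release gate that compares measured metrics
# against thresholds defined before testing, in the spirit of Article 9(8).
# Metric names and values are hypothetical, not taken from the regulation.

THRESHOLDS = {
    "sensitivity": 0.95,   # minimum acceptable true-positive rate
    "specificity": 0.90,   # minimum acceptable true-negative rate
}

def release_gate(measured: dict[str, float]) -> list[str]:
    """Return the metrics that fail their prior-defined threshold."""
    return [
        name for name, minimum in THRESHOLDS.items()
        if measured.get(name, 0.0) < minimum
    ]

# Example run: specificity misses its threshold, so the gate flags it
# and the system should not yet be placed on the market.
failures = release_gate({"sensitivity": 0.97, "specificity": 0.88})
```

The point of the sketch is that the thresholds live in version-controlled configuration, not in the judgment of whoever runs the final test.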
Testing and data
Article 9 interacts tightly with Article 10 (data governance) and Article 15 (accuracy, robustness, cybersecurity). Your risk file should trace how training, validation, and test data support the claimed performance, and how cybersecurity and robustness controls reduce identified risks.
Documentation and updates
Outputs of risk management feed Annex IV technical documentation and post-market monitoring (Article 72). When you make substantial modifications, reassess risks and update documentation before further placing on the market or putting into service.
How Article 9 connects to Articles 7–8 and classification
- Article 8 — Article 8(1) requires the Article 9 risk management system to be taken into account when ensuring compliance with Chapter III, Section 2.
- Article 6 + Annex III — Risk analysis should map to Annex III harms and any Article 6(3) position.
- Article 7 — If Annex III changes via delegated acts, re-open hazard assumptions and residual-risk acceptance.
- Article 11 + Annex IV — Risk outputs feed technical documentation.
- Article 113 — Application dates for operationalising the risk system.
Official wording: Article 9 — Risk management system
The following reproduces Article 9 in full from the English consolidated text of Regulation (EU) 2024/1689.
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:
(a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;

(c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;

(d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).

3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.

4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.

5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable.

In identifying the most appropriate risk management measures, the following shall be ensured:

(a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system;

(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;

(c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers.

With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.

6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.

7. Testing procedures may include testing in real-world conditions in accordance with Article 60.

8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.

9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.

10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex contextualise lifecycle risk management, testing, and proportionality for high-risk AI. Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Maintain a living risk register linked to harms (safety + fundamental rights).
- Define acceptance criteria for residual risk per use case and document sign-off.
- Run documented tests with metrics; version control datasets and model checkpoints used for evidence.
- Map foreseeable misuse and abuse scenarios; include prompt-injection or data-poisoning where relevant.
- Connect risk controls to logging (Article 12) and incident reporting (Article 73) workflows.
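A "living risk register" from the checklist above can be kept as structured data rather than prose, so that open (unaccepted) residual risks are machine-checkable before release. The following is a minimal sketch under stated assumptions: the field names, the qualitative residual-risk scale, and the example hazard are illustrative choices, not terms defined by the Act.

```python
# Illustrative risk-register sketch: one entry per hazard, linking harms,
# mitigations, residual-risk judgement, and the documented sign-off decision.
# All field names and example values are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    hazard: str            # e.g. "missed lesion (false negative)"
    harms: list[str]       # health, safety, or fundamental-rights harms
    mitigations: list[str] # design or control measures adopted
    residual_risk: str     # qualitative level after mitigation
    acceptable: bool       # documented residual-risk sign-off
    reviewed: date         # date of last systematic review

register = [
    RiskEntry(
        hazard="missed lesion (false negative)",
        harms=["patient health"],
        mitigations=["human review of low-confidence outputs"],
        residual_risk="low",
        acceptable=True,
        reviewed=date(2025, 1, 15),
    ),
]

# Entries still awaiting sign-off block release in this sketch.
open_items = [entry for entry in register if not entry.acceptable]
```

Keeping the register in version control alongside test evidence also makes the "regular systematic review and updating" required by Article 9(2) auditable.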
Related Articles
Article 6: Classification Rules for High-Risk Systems
Article 7: Amendments to Annex III
Article 8: Compliance with the requirements
Article 10: Data and Data Governance
Article 11: Technical Documentation
Article 14: Human oversight
Article 72: Post-Market Monitoring
Annex III: High-Risk AI System Areas
Annex IV: Technical Documentation for High-Risk AI Systems
Frequently asked questions
Is ISO 42001 enough for Article 9?
A management-system certificate can help operationally, but you still need AI Act-specific evidence: lifecycle risk analysis tied to Annex III harms, metrics, and documentation required by Articles 11–15 and Annex IV.
Who owns the risk file at deployers?
Providers carry primary design-stage obligations. Deployers must use systems according to instructions and may need to supplement monitoring under Article 26; significant customisation can shift responsibilities along the value chain (Article 25).
Does the risk management system require ongoing monitoring after deployment?
Yes. Article 9(2) requires the risk management system to be a continuous iterative process planned and run throughout the entire lifecycle of the high-risk AI system. Providers must update the risk assessment based on post-market monitoring data and feedback from deployers.