Article 15: Accuracy, robustness and cybersecurity
Article 15 requires high-risk AI systems to achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. Accuracy levels and metrics must be declared in the instructions for use. Robustness covers resilience to errors, faults, inconsistencies, and feedback-loop risks for continual-learning systems. Cybersecurity measures must address AI-specific vulnerabilities including data poisoning, model poisoning, adversarial examples, and confidentiality attacks.
Who does this apply to?
- Providers designing, training, and hardening high-risk AI systems before placing them on the market
- Deployers operating systems in environments that affect robustness and security exposure
Scenarios
- A model continues to learn online in production, with no controls preventing biased outputs from feeding back into future training data.
- The IFU claims 98% accuracy, but internal tests show 85% on representative EU data.
What Article 15 requires (in plain terms)
Accuracy (paragraph 3) — choose metrics that fit the intended purpose; validate them against representative data and declare the levels and metrics in the instructions for use (IFU).
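For concreteness, here is a minimal sketch of how a provider might compute and record the figures that end up in the IFU declaration. It assumes a scikit-learn-compatible classifier and an illustrative representative validation set; the variable names, chosen metrics, and output file are assumptions for the example, not anything prescribed by the Act.

```python
# Sketch: compute accuracy metrics for the IFU declaration (Article 15(3)).
# Assumes a trained scikit-learn-compatible classifier and a representative
# validation set; all names and the JSON layout are illustrative.
import json
from sklearn.metrics import accuracy_score, f1_score, recall_score

def build_ifu_accuracy_declaration(model, X_val, y_val, dataset_description):
    y_pred = model.predict(X_val)
    declaration = {
        "metrics": {
            "accuracy": round(accuracy_score(y_val, y_pred), 4),
            "f1_macro": round(f1_score(y_val, y_pred, average="macro"), 4),
            "recall_macro": round(recall_score(y_val, y_pred, average="macro"), 4),
        },
        # Record what the figures were measured on, so the IFU and Annex IV stay in sync.
        "validation_dataset": dataset_description,
    }
    with open("ifu_accuracy_declaration.json", "w") as fh:
        json.dump(declaration, fh, indent=2)
    return declaration
```

Keeping this artefact under version control alongside the technical documentation makes it easier to show that the declared levels and the validation evidence come from the same test run.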
Commission benchmarks (paragraph 2) — the Commission shall encourage, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, the development of benchmarks and measurement methodologies. Track emerging standards that affect your evidence strategy.
Robustness (paragraph 4) — design for errors, faults, and inconsistencies that may occur within the system or its environment, particularly due to interaction with natural persons or other systems. Take technical and organisational measures accordingly. Robustness may be achieved through technical redundancy, including backup or fail-safe plans. For systems that continue to learn after placement on the market, develop them so as to eliminate or reduce the risk of biased outputs influencing future input (feedback loops), and ensure any such loops are addressed with appropriate mitigation measures.
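As an illustration of what paragraph 4 can look like in practice, the sketch below combines a fail-safe fallback (technical redundancy) with a gate that keeps suspect outputs out of future training data (a feedback-loop control). The confidence threshold, the safe default, and the bias check are assumptions chosen for the example, not requirements from the text.

```python
# Sketch: two Article 15(4)-style robustness measures.
# 1) Fail-safe wrapper: errors or low-confidence outputs fall back to a safe default.
# 2) Feedback-loop gate: possibly biased outputs are quarantined instead of
#    re-entering the training pool of a continually learning system.
# Thresholds, names and the bias check are illustrative assumptions.
from dataclasses import dataclass, field

CONFIDENCE_FLOOR = 0.70                                  # below this, defer to a human
SAFE_DEFAULT = {"decision": "refer_to_human", "confidence": None}

@dataclass
class TrainingPool:
    accepted: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

def predict_with_failsafe(model, features):
    """Return the model output, or the safe default on error or low confidence."""
    try:
        result = model.predict(features)                 # assumed to return a dict
    except Exception:
        return SAFE_DEFAULT
    if result.get("confidence", 0.0) < CONFIDENCE_FLOOR:
        return SAFE_DEFAULT
    return result

def gate_feedback(pool: TrainingPool, record, bias_check):
    """Only records that pass an explicit bias check may influence future training."""
    if bias_check(record):                               # bias_check flags suspect records
        pool.quarantined.append(record)
    else:
        pool.accepted.append(record)
```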
Cybersecurity (paragraph 5) — the system must be resilient against attempts by unauthorised third parties to alter its use, outputs, or performance by exploiting system vulnerabilities. Technical solutions shall be appropriate to the circumstances and risks. Where appropriate, measures shall prevent, detect, respond to, resolve and control for: data poisoning (manipulating the training dataset), model poisoning (tampering with pre-trained components used in training), adversarial examples or model evasion (inputs designed to cause mistakes), confidentiality attacks, and model flaws.
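One concrete control against the data-poisoning and model-poisoning threats named in paragraph 5 is to pin cryptographic hashes of approved datasets and pre-trained components and verify them before every training run. The sketch below shows the idea; the manifest format and file paths are assumptions for illustration.

```python
# Sketch: integrity verification of training data and pre-trained components,
# one possible control against data poisoning and model poisoning (Article 15(5)).
# The manifest layout and file names are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):   # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: str = "artifact_manifest.json") -> None:
    """Abort the training run if any artefact no longer matches its pinned hash."""
    manifest = json.loads(Path(manifest_path).read_text())
    for entry in manifest["artifacts"]:                      # datasets, pre-trained weights, ...
        actual = sha256_of(Path(entry["path"]))
        if actual != entry["sha256"]:
            raise RuntimeError(
                f"Integrity check failed for {entry['path']}: "
                f"expected {entry['sha256']}, got {actual}"
            )

# Example manifest:
# {"artifacts": [{"path": "data/train.parquet", "sha256": "..."},
#                {"path": "models/backbone.pt",  "sha256": "..."}]}
```

Adversarial-example and model-evasion threats call for testing rather than hashing; the appropriate tooling depends on the modality and is outside this sketch.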
How Article 15 connects to the rest of the Act
- Article 8 — Article 15 is a core Section 2 requirement framed by Article 8.
- Article 9 — Testing and metrics in Article 9 should evidence Article 15 claims.
- Article 10 — Dataset quality underpins accuracy and robustness.
- Article 13 — IFU must include Article 15 metrics and known circumstances affecting them (Article 13(3)(b)(ii)).
- Article 11 — Annex IV captures validation and cybersecurity evidence.
- Article 12 — Logs help detect robustness and security incidents in operation.
- Article 113 — Application dates.
Official wording: Article 15 (English)
The following reproduces Article 15 from the consolidated English text of Regulation (EU) 2024/1689. Always verify against EUR-Lex before compliance decisions.
1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.
3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard.
The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.
The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
The technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws.
Relevant recitals
Recital 74 explains that accuracy levels should be communicated to deployers and that the metrics should be appropriate to the intended purpose. Recital 75 stresses resilience to errors and inconsistencies arising from interaction with natural persons, other systems, or the operating environment. Recital 76 addresses cybersecurity, noting that high-risk AI systems face unique attack vectors — data poisoning can undermine training integrity, adversarial examples can bypass safeguards, and model theft can compromise confidential architectures. Read together, the recitals signal that providers should go beyond generic IT security and implement measures proportionate to AI-specific threat models.
Compliance checklist
- Define accuracy metrics tied to intended purpose; document test datasets and acceptance thresholds.
- Run robustness testing for edge cases, distribution shift, and adversarial inputs relevant to your modality.
- Implement SBOM, patch management, and incident response aligned with Article 15(5) threats.
- If the system continues learning post-market, document feedback-loop controls per Article 15(4).
- Ensure the metrics declared in the IFU match Annex IV evidence and monitoring dashboards (see the sketch after this list).
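For the last item, a lightweight guard can compare live monitoring figures against the levels declared in the IFU and flag drift for escalation. The declared values, tolerance, and metric names below are assumptions for illustration; the second scenario above (98% declared versus 85% observed) would be flagged.

```python
# Sketch: check that monitored production metrics stay consistent with the
# accuracy levels declared in the IFU. Values and names are illustrative.
DECLARED = {"accuracy": 0.98, "f1_macro": 0.96}    # as stated in the IFU
TOLERANCE = 0.02                                   # acceptable drift before escalation

def check_against_ifu(observed, declared=DECLARED, tol=TOLERANCE):
    """Return the metrics whose observed value falls more than `tol` below the
    declared level, so they can be escalated and the IFU or model revisited."""
    return {
        name: {"declared": declared[name], "observed": observed.get(name)}
        for name in declared
        if observed.get(name, 0.0) < declared[name] - tol
    }

breaches = check_against_ifu({"accuracy": 0.85, "f1_macro": 0.90})
if breaches:
    print("Escalate: monitored performance below declared IFU levels:", breaches)
```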
Map obligations across articles—start the free assessment.
Related Articles
Article 8: Compliance with the requirements
Article 9: Risk Management System
Article 10: Data and Data Governance
Article 11: Technical Documentation
Article 12: Record-keeping
Article 13: Transparency and provision of information to deployers
Article 14: Human oversight
Article 16: Obligations of Providers of High-Risk AI Systems
Article 113: Entry into Force and Application Dates
Related annexes
- Annex IV — Technical documentation
Frequently asked questions
Is SOC 2 enough for Article 15?
Organisational security attestations help, but you still need AI-specific testing and evidence mapped to Article 15(4)–(5) and Annex IV.
Do we re-certify after every model fine-tune?
Substantial modifications can trigger reassessment under the conformity framework; minor updates may not, but always reassess accuracy/robustness claims and documentation.
How does Article 15 relate to the Cyber Resilience Act?
Article 15(5) requires cybersecurity measures appropriate to the circumstances and risks. For AI systems that are also products with digital elements, the Cyber Resilience Act applies in parallel. High-risk AI systems must address AI-specific threats — data poisoning, adversarial examples, model manipulation — alongside the standard vulnerability handling and patch management requirements of the CRA.