Article 13: Transparency and provision of information to deployers
Article 13 requires high-risk AI systems to be designed and developed so that their operation is sufficiently transparent for deployers to interpret outputs and use them appropriately, with a type and degree of transparency commensurate with the Section 3 obligations of providers and deployers. Providers must supply instructions for use (IFU) in an appropriate digital or other format: concise, complete, correct, and clear. Article 13(3) lists the minimum IFU content: provider identity and contact details; characteristics, capabilities and limitations of performance (intended purpose; accuracy, robustness and cybersecurity metrics; foreseeable risk circumstances; explainability; group-specific performance; input data specifications; output interpretation); pre-determined changes; Article 14 human oversight measures; computational and hardware resources, expected lifetime and maintenance; and Article 12 log collection mechanisms.
Who does this apply to?
- Providers of high-risk AI systems, who must draft and maintain instructions for use
- Deployers, who rely on Article 13 information to meet their Article 26 duties
Scenarios
- Non-compliant: a credit decisioning model ships without documented calibration limits, known failure modes, or human override paths.
- Compliant: the instructions include confidence thresholds, escalation paths, and how to collect logs per Article 12.
Transparency and instructions (plain terms)
Paragraph 1 targets runtime transparency for deployers: the system must be transparent enough that deployers can interpret outputs and use the system appropriately, with an appropriate type and degree of transparency to support Section 3 duties.
Paragraph 2 mandates instructions for use (digital or otherwise) that are concise, complete, correct, clear, relevant, accessible, and comprehensible.
Paragraph 3 specifies minimum IFU content:
- (a) Provider identity and contact details (and authorised representative where applicable)
- (b) Characteristics, capabilities and limitations of performance, including: (i) intended purpose; (ii) accuracy, robustness and cybersecurity metrics per Article 15; (iii) foreseeable risk circumstances; (iv) explainability capabilities; (v) group-specific performance; (vi) input data specifications and training/validation/testing data information; (vii) output interpretation guidance
- (c) Pre-determined changes to the system and its performance
- (d) Human oversight measures per Article 14, including interpretation aids
- (e) Computational and hardware resources, expected lifetime, maintenance and software updates
- (f) Log collection, storage and interpretation mechanisms per Article 12
Treat the IFU as regulated product information, not marketing.
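One way to treat the IFU as regulated product information is to model it as a structured record and gate releases on completeness. The sketch below is illustrative only: the field names are not official terminology, and the mandatory/optional split simply mirrors the "where applicable" and "when appropriate" qualifiers in Article 13(3).

```python
from dataclasses import dataclass, fields
from typing import Optional

# Illustrative sketch: the Article 13(3) minimum IFU content as a record.
# Field names are assumptions, not official terminology from the Act.
@dataclass
class InstructionsForUse:
    provider_identity: str                              # 13(3)(a)
    intended_purpose: str                               # 13(3)(b)(i)
    accuracy_robustness_cybersecurity: str              # 13(3)(b)(ii), per Article 15
    foreseeable_risk_circumstances: str                 # 13(3)(b)(iii)
    explainability: Optional[str] = None                # 13(3)(b)(iv), where applicable
    group_specific_performance: Optional[str] = None    # 13(3)(b)(v), when appropriate
    input_data_specs: Optional[str] = None              # 13(3)(b)(vi), when appropriate
    output_interpretation: Optional[str] = None         # 13(3)(b)(vii), where applicable
    predetermined_changes: str = ""                     # 13(3)(c)
    human_oversight_measures: str = ""                  # 13(3)(d), per Article 14
    resources_lifetime_maintenance: str = ""            # 13(3)(e)
    log_mechanisms: Optional[str] = None                # 13(3)(f), where relevant

def missing_mandatory_sections(ifu: InstructionsForUse) -> list[str]:
    """Return the names of unconditionally required sections left empty.

    Fields annotated as plain ``str`` are treated as mandatory; fields
    annotated ``Optional[str]`` carry a "where applicable" qualifier.
    """
    mandatory = [f.name for f in fields(ifu) if f.type is str]
    return [name for name in mandatory if not getattr(ifu, name).strip()]
```

A pre-release check can then fail the build whenever `missing_mandatory_sections` returns a non-empty list, forcing the drafting team to complete the IFU before the system ships.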
How Article 13 connects to the rest of the Act
- Article 12 — IFU must explain how deployers collect, store, and interpret logs where relevant (Article 13(3)(f)).
- Article 14 — Oversight measures referenced in the IFU must match what the system actually enables.
- Article 15 — Accuracy, robustness, and cybersecurity metrics declared in the IFU (Article 15(3) cross-link).
- Article 26 — Deployers depend on Article 13 to operate lawfully.
- Article 11 — Annex IV content and IFU must tell one story to authorities.
- Article 4 — AI literacy supports teams reading IFU material correctly.
- Article 50 — Different transparency tier; do not conflate with Article 13.
- Article 113 — Application dates.
Official wording: Article 13 — Transparency and provision of information to deployers (English)
The following reproduces the complete text of Article 13 from the English consolidated text of Regulation (EU) 2024/1689 (OJ L 2024/1689).
1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3.
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.
3. The instructions for use shall contain at least the following information:
(a) the identity and the contact details of the provider and, where applicable, of its authorised representative;
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
(i) its intended purpose;
(ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2);
(iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;
(v) when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;
(vi) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;
(vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;
(c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;
(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;
(e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;
(f) where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex contextualise deployer-facing transparency, instructions, and human oversight. Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Draft IFUs in languages deployers actually operate in.
- Document failure modes, uncertainty signals, and safe operating envelopes.
- Align UI affordances with oversight procedures promised in the IFU.
- Publish change logs when IFU content changes between versions.
- Train customer success teams on legally binding IFU sections.
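The change-log item in the checklist above can be enforced mechanically. This is a minimal sketch under assumed conventions (the gate policy and function names are illustrative, not drawn from the Act): a release is blocked whenever the IFU content changed but no change-log entry was recorded.

```python
import hashlib

def ifu_fingerprint(text: str) -> str:
    """Stable fingerprint of IFU content, ignoring trailing whitespace per line."""
    normalised = "\n".join(line.rstrip() for line in text.splitlines())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def release_gate(prev_ifu: str, new_ifu: str, changelog_updated: bool) -> bool:
    """Allow release if the IFU is unchanged, or changed with a change-log entry."""
    if ifu_fingerprint(prev_ifu) == ifu_fingerprint(new_ifu):
        return True
    return changelog_updated
```

Wired into CI, such a gate makes "publish change logs when IFU content changes between versions" a hard requirement rather than a manual reminder.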
Related Articles
Article 4: AI literacy
Article 8: Compliance with the requirements
Article 11: Technical Documentation
Article 12: Record-keeping
Article 14: Human oversight
Article 15: Accuracy, robustness and cybersecurity
Article 26: Obligations of Deployers of High-Risk AI Systems
Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems
Article 113: Entry into Force and Application Dates
Related annexes
- Annex IV — Technical documentation
Frequently asked questions
Is a README enough?
Only if it systematically covers Article 13(3) items and cross-checks with Annex IV. Authorities expect structured, complete information.
Does Article 13 apply if we only provide an API?
Yes. If the high-risk system is placed on the market or put into service for a deployer, you still need deployer-appropriate instructions; the format may differ, but the substance cannot be omitted.
Must the instructions for use be in the deployer's national language?
In practice, usually yes. Article 13(2) requires the information to be relevant, accessible and comprehensible to deployers, which generally means providing the instructions for use in a language deployers understand; Member States may require an official language of the Member State where the system is placed on the market or put into service.