Chapter III, Section 3 — Obligations of providers and deployers of high-risk AI systems and of other parties › Article 26

Article 26: Obligations of Deployers of High-Risk AI Systems

Applies from 2 Aug 2026 · 13 min read · EUR-Lex verified Apr 2026

Article 26 is the deployer-side pillar of the AI Act for high-risk systems. It imposes operational and governance duties on any natural or legal person using a high-risk AI system under their authority in the Union. Key obligations include: using systems in accordance with instructions for use (paragraph 1), assigning human oversight to competent persons (paragraph 2), ensuring input data relevance (paragraph 4), monitoring for risks and incidents (paragraph 5), retaining logs for at least six months (paragraph 6), informing workers (paragraph 7), complying with EU database registration (paragraph 8), conducting DPIAs where required (paragraph 9), obtaining authorisation for post-remote biometric identification in law enforcement (paragraph 10), informing affected natural persons (paragraph 11), and cooperating with authorities (paragraph 12). Where a deployer makes substantial modifications or uses a system outside its intended purpose, the deployer may become a provider under Article 25.

Who does this apply to?

  • Any natural or legal person using a high-risk AI system under their authority in the Union (except purely personal non-professional use)
  • Healthcare, financial, HR, and public-sector organisations deploying Annex III systems
  • IT, procurement, and compliance teams operationalising high-risk AI deployments
  • Deployers who may become providers by making substantial modifications (Article 25 re-qualification)

Scenarios

A hospital deploys radiology AI but assigns no trained radiologist to review outputs or handle escalation.

Potential breach of Article 26(2): deployers must assign competent human oversight in line with the Article 14 oversight measures identified by the provider.
Ref. Art. 26(2)

A bank's AI credit scoring system retains automated logs for only 30 days; the Act requires at least six months.

Breach of Article 26(6): logs must be kept for a period appropriate to the intended purpose, at least six months unless Union or national law provides otherwise.
Ref. Art. 26(6)
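The retention floor in this scenario can be checked mechanically. A minimal sketch, assuming a policy expressed in days; the six-month floor is from Article 26(6), while the function name, the 183-day approximation of six months, and the `override_by_law` flag are illustrative:

```python
from datetime import timedelta

# Article 26(6): logs must be kept for at least six months unless Union or
# national law provides otherwise. Six months approximated as 183 days here.
MIN_RETENTION = timedelta(days=183)

def retention_compliant(policy_days: int, override_by_law: bool = False) -> bool:
    """True if a log-retention policy meets the Article 26(6) floor (sketch).

    override_by_law models the carve-out for Union/national law that
    provides a different (shorter or longer) retention period.
    """
    if override_by_law:
        return True  # period set by other applicable law
    return timedelta(days=policy_days) >= MIN_RETENTION

# The bank in the scenario keeps logs for only 30 days: non-compliant.
print(retention_compliant(30))   # False
print(retention_compliant(183))  # True
```

In practice the check belongs in procurement and configuration review, not runtime code, but encoding the floor once avoids each team re-reading the Regulation.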

A government agency procures an AI system for social benefit allocation and completes a fundamental rights impact assessment before deployment.

Aligned with Article 27 (FRIA) duties for public-sector deployers of high-risk systems; a DPIA under Article 26(9) may also be required where GDPR Article 35 applies.
Ref. Art. 27 + Art. 26(9)

A company repurposes a recruitment AI (intended for CV screening) to also predict employee attrition without informing the provider.

May trigger Article 25 re-qualification: the deployer could become a provider for the modified system and inherit full Chapter III provider obligations.
Ref. Art. 25 + Art. 26

What Article 26 requires (in plain terms)

Article 26 builds the deployer half of the high-risk governance framework (providers carry the design-time obligations in Articles 8–21; deployers carry the operational duties). Key duties:

1. Use per instructions — implement the system as described in the provider's instructions for use (Article 13).
2. Human oversight — assign natural persons with the necessary competence, training, authority, and resources to exercise oversight as designed under Article 14.
3. Without prejudice — paragraphs 1 and 2 do not override other deployer obligations under Union or national law, nor the deployer's freedom to organise its own resources for implementing oversight measures.
4. Input data relevance — ensure input data is relevant and sufficiently representative in view of the intended purpose, to the extent the deployer exercises control over it.
5. Monitoring — observe operation for risks and, where identified, inform the provider or distributor and suspend use if necessary; immediately inform the provider and authorities of serious incidents.
6. Log retention — keep logs automatically generated by the system for a period appropriate to the intended purpose, at least six months unless Union or national law provides otherwise.
7. Worker information — before workplace deployment, inform workers' representatives and affected workers that they will be subject to the high-risk AI system.
8. EU database — public authorities and Union bodies must comply with Article 49 registration and must not use unregistered systems.
9. DPIA — carry out a data protection impact assessment under GDPR Article 35 where the processing triggers that obligation, using information from the Article 13 instructions.
10. Post-remote biometric identification — law enforcement deployers must request judicial or administrative authorisation for targeted biometric searches, ex ante or no later than 48 hours after use, with strict limitations.
11. Inform affected persons — deployers of Annex III systems making or assisting decisions related to natural persons must inform those persons that they are subject to a high-risk AI system.
12. Cooperation — cooperate with competent authorities and provide information and access on request.
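These duties lend themselves to a compliance matrix keyed by Article 26 paragraph, as the checklist later in this page suggests. A minimal sketch; the paragraph numbers and duty summaries come from the Act (paragraph 3 is a saving clause rather than a duty, so it is omitted), while the field names, owner/status values, and helper function are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Duty:
    paragraph: str        # Article 26 paragraph, e.g. "26(6)"
    summary: str          # plain-language duty
    owner: str = ""       # accountable role (illustrative field)
    status: str = "open"  # open / in-progress / done (illustrative values)

# The operative Article 26 duties, keyed by paragraph.
MATRIX = [
    Duty("26(1)", "Use per provider instructions for use"),
    Duty("26(2)", "Assign competent human oversight"),
    Duty("26(4)", "Ensure input data relevance and representativeness"),
    Duty("26(5)", "Monitor operation; report risks and serious incidents"),
    Duty("26(6)", "Retain logs for at least six months"),
    Duty("26(7)", "Inform workers before workplace deployment"),
    Duty("26(8)", "EU database registration (public bodies)"),
    Duty("26(9)", "DPIA where GDPR Art. 35 applies"),
    Duty("26(10)", "Authorisation for post-remote biometric identification"),
    Duty("26(11)", "Inform affected natural persons"),
    Duty("26(12)", "Cooperate with competent authorities"),
]

def open_duties(matrix):
    """Paragraphs whose duty is not yet closed out."""
    return [d.paragraph for d in matrix if d.status != "done"]
```

A structure like this makes gaps auditable: each paragraph has a named owner and a status, and `open_duties` surfaces what remains before go-live.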

Relationship to providers and the value chain

Deployers are not passive customers. Under Article 25, a deployer that:

  • Puts their own name or trademark on a high-risk system already on the market
  • Makes a substantial modification to a high-risk system
  • Modifies the intended purpose so that the system becomes high-risk

…is treated as a provider and must comply with provider obligations (risk management, technical documentation, conformity assessment, etc.).

Contractual arrangements between provider and deployer should clearly allocate responsibilities—but regulatory duties cannot be disclaimed by contract.

How Article 26 connects to the rest of the Act

  • Article 13 — Deployers rely on instructions for use to operate lawfully; Article 13 is the provider's obligation to supply them.
  • Article 14 — Human oversight measures must be implemented by deployers as designed or identified by providers.
  • Article 25 — Deployer-to-provider re-qualification when substantial modifications or rebranding occur.
  • Article 27 — FRIA requirements for deployers that are public-sector bodies or entities acting on their behalf.
  • Article 49 — EU database registration and information duties.
  • Article 72 — Deployer monitoring feeds the provider's post-market loop.
  • Article 73 — Serious incident reporting by deployers.
  • Article 9 — Providers' risk analysis should anticipate deployer operational contexts.
  • Article 6 + Annex III — Whether deployer duties activate depends on high-risk classification.
  • Article 99 — Penalties for deployer infringements.
  • Article 113 — Application dates.

Official wording: Article 26 — Obligations of deployers of high-risk AI systems (English)

The following reproduces the complete text of Article 26 from the English consolidated text of Regulation (EU) 2024/1689 (OJ L 2024/1689).
1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6.
2. Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.
3. The obligations set out in paragraphs 1 and 2, are without prejudice to other deployer obligations under Union or national law and to the deployer's freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
4. Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
5. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.
For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law.
6. Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data.
Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law.
7. Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers' representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives.
8. Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor.
9. Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.
10. Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence.
If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted.
In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems.
This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of Directive (EU) 2016/680 for the processing of biometric data.
Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities.
Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment.
Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric identification systems.
11. Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply.
12. Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation.

Recitals (preamble) on EUR-Lex

The recitals in the same consolidated AI Act on EUR-Lex contextualise deployer responsibility, shared obligations across the value chain, worker information rights, and fundamental rights impact assessments. Use the official preamble on EUR-Lex; do not rely on unofficial recital lists without checking sequence and wording against the authentic text.

Compliance checklist

  • Maintain a deployer compliance matrix mapped to each Article 26 paragraph.
  • Contractually require providers to supply timely security patches, incident guidance, and updated instructions for use.
  • Assign named human overseers with documented competence, training records, and authority to intervene.
  • Validate that input data fed to the system is relevant and representative for your operational context.
  • Operationalise log retention (minimum six months) with privacy minimisation, access controls, and integrity measures.
  • Integrate AI monitoring into existing ITSM / SOC / incident reporting workflows (Article 73).
  • Conduct DPIA under GDPR Article 35 where automated decision-making or special-category data is involved.
  • For public-sector deployments involving Annex III systems, complete a FRIA under Article 27 before go-live.
  • Register use in the EU database under Article 49 where required.
  • Inform affected natural persons that they are subject to a high-risk AI system where required by national law or Article 26(11).


Related annexes

  • Annex III — High-risk AI system areas
  • Annex VIII — Registration information (Article 49)

Frequently asked questions

Are cloud customers always deployers?

Usually the organisation directing use is the deployer. Complex B2B chains require Article 25 analysis to determine who is provider, importer, or distributor in the value chain.

Can we delegate deployer duties to the provider by contract?

No. Regulatory obligations under the AI Act cannot be disclaimed by contract. You can allocate operational tasks, but the deployer remains responsible under the Regulation.

Do deployers need to conduct a FRIA?

Only certain deployers: bodies governed by public law, private entities providing public services, and deployers of credit scoring or life/health insurance systems under Annex III points 5(b) and 5(c). Check Article 27 for the exact scope.

What if the provider does not supply adequate instructions?

Deployers should escalate to the provider and, if unresolved, may need to notify market surveillance authorities. Operating without adequate Article 13 information creates compliance risk for the deployer.

How long must logs be retained?

At least six months, unless Union or national law provides otherwise. Align with GDPR data minimisation and your sectoral retention requirements.