Chapter IV — Transparency obligations for providers and deployers of certain AI systems

Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems

Applies from 2 August 2026 · Verified against EUR-Lex, April 2026

Article 50 creates transparency duties for AI systems that interact with natural persons (paragraph 1), that generate synthetic content requiring machine-readable marking (paragraph 2), that perform emotion recognition or biometric categorisation (paragraph 3), or that generate or manipulate image, audio, or video content constituting deepfakes (paragraph 4). Paragraphs 5–7 set timing, non-prejudice, and codes-of-practice provisions. Unlike the Chapter III high-risk regime, Article 50 does not require full conformity assessment, risk management, or technical documentation—but the obligations are legally binding and subject to penalties. The core principle: people must know when they interact with AI or when content has been artificially generated or manipulated.

Who does this apply to?

  • Providers of AI systems designed to interact directly with natural persons (chatbots, voice assistants, etc.)
  • Deployers of AI systems that interact with natural persons (presentation/disclosure duty)
  • Providers of AI systems that generate or substantially manipulate image, audio, video, or text content
  • Deployers using AI-generated or manipulated content (deepfakes) in public-facing outputs
  • Deployers of emotion recognition or biometric categorisation systems outside the high-risk or prohibited tiers

Scenarios

A customer support widget uses a large language model but presents itself as 'Alex from support' with no AI disclosure.

Likely Article 50(1) breach: natural persons must be informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.
Ref. Art. 50(1)

A marketing team synthetically alters a spokesperson's lip-sync in a video advertisement without any visible label or metadata.

May trigger Article 50(4) disclosure: deployers using AI to generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.
Ref. Art. 50(4)

An employer installs emotion-recognition cameras in the lobby to infer visitor mood for optimising front-desk staffing.

Article 50(3) requires informing exposed persons and processing personal data in accordance with applicable Union and national law (also cross-check Article 5 workplace emotion ban).
Ref. Art. 50(3)

A news outlet publishes an AI-generated satirical image clearly labelled as parody in the editorial context.

May benefit from the Article 50(4) exception for evidently artistic, creative, satirical, fictional, or analogous works—but the machine-readable labelling obligation under Article 50(2) still applies to the provider.
Ref. Art. 50(4) + Art. 50(2)

What Article 50 requires (in plain terms)

Article 50 addresses four main transparency scenarios plus procedural provisions:

Paragraph 1 — AI interaction disclosure: Providers must design AI systems that interact directly with natural persons so that people are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use to a reasonably well-informed, observant and circumspect person. Exception for AI systems authorised by law for criminal offence detection/prevention/investigation/prosecution (subject to safeguards), unless available for public crime reporting.
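As an illustration only, a chat product could satisfy the "informed at the latest at first interaction" requirement (paragraphs 1 and 5) by prepending a disclosure to the bot's first turn. This is a hypothetical sketch of one UI pattern; the Act requires that persons are informed, not any particular wording or placement:

```python
# Hypothetical sketch of an Article 50(1)-style disclosure in a chat widget.
# The disclosure text and placement are illustrative, not prescribed by the Act.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def first_reply(bot_message: str, is_first_turn: bool) -> str:
    """Prepend the AI disclosure to the very first bot turn, so the user
    is informed no later than the first interaction (cf. Article 50(5))."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{bot_message}"
    return bot_message
```

A disclosure buried in terms of service would not follow this pattern; the point of putting it in the first turn is that it reaches the user at the moment of interaction.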

Paragraph 2 — Machine-readable content marking: Providers of AI systems (including GPAI systems) generating synthetic audio, image, video, or text content must ensure outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. Technical solutions must be effective, interoperable, robust, and reliable to the extent technically feasible. Does not apply to assistive editing functions or where authorised by law for criminal offence purposes.
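The Act does not prescribe any particular marking technique; real deployments would follow an interoperable provenance standard rather than an ad-hoc format. Purely as a sketch of the two halves of the obligation (marking and detectability), a minimal machine-readable record could look like this, with all field names being our own invention:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch only: the AI Act mandates machine-readable marking
# and detectability, not this specific JSON format or field set.
def make_provenance_record(content: bytes, generator_id: str) -> str:
    """Build a machine-readable marker declaring content as AI-generated."""
    record = {
        "artificially_generated": True,                  # core Art. 50(2) declaration
        "generator": generator_id,                       # which system produced it
        "sha256": hashlib.sha256(content).hexdigest(),   # binds marker to the asset
        "marked_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

def is_marked_as_synthetic(record_json: str, content: bytes) -> bool:
    """Detection side: verify the marker covers the asset it claims to cover."""
    record = json.loads(record_json)
    return (
        record.get("artificially_generated") is True
        and record.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

Hashing the content into the record is one way to make the marker robust against being detached and reattached to other assets; interoperability with other parties' detection tooling is exactly what the Article 50(7) codes of practice are meant to address.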

Paragraph 3 — Emotion recognition and biometric categorisation: Deployers of emotion recognition or biometric categorisation systems must inform exposed natural persons of the operation of the system, and process personal data in accordance with applicable Union and national law. Exception for systems permitted by law for criminal offence detection/prevention/investigation, subject to safeguards.

Paragraph 4 — Deepfake disclosure by deployers: Deployers using AI to generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated. Exceptions for law enforcement authorisation. For evidently artistic, creative, satirical, fictional, or analogous works, the obligation is limited to disclosure that does not hamper display or enjoyment. For AI-generated text published on matters of public interest, deployers must disclose artificial origin unless the content has undergone human review/editorial control with a person holding editorial responsibility.

Paragraph 5 — Timing and accessibility: Information under paragraphs 1–4 must be provided clearly and distinguishably at the latest at first interaction or exposure, conforming to accessibility requirements.

Paragraph 6 — Non-prejudice: Paragraphs 1–4 do not affect Chapter III requirements and are without prejudice to other transparency obligations under Union or national law.

Paragraph 7 — Codes of practice: The AI Office shall encourage codes of practice for detection and labelling of artificially generated or manipulated content. The Commission may adopt implementing acts to approve or specify common rules.

Article 50 vs Article 13 vs Article 5 (common confusion)

Teams sometimes confuse three different transparency-related provisions:

  • Article 13 = transparency for deployers of high-risk AI systems (instructions for use, operational limits).
  • Article 50 = transparency for users and the public interacting with certain AI systems (chatbot disclosure, deepfake labelling, emotion recognition notices)—not limited to high-risk.
  • Article 5(1)(f) = prohibition of emotion inference in workplaces and education—different from Article 50(3) emotion recognition notices.

A system can be subject to both Article 13 (if high-risk) and Article 50 (if it interacts with natural persons).

How Article 50 connects to the rest of the Act

  • Article 1 — Transparency obligations are one of the harmonised rule areas in Article 1(2)(d).
  • Article 2 — Scope determines which providers and deployers are caught.
  • Article 3 — Definitions of AI system, provider, deployer, biometric categorisation, emotion recognition.
  • Article 5 — Some emotion recognition and biometric uses are prohibited outright (check before applying Article 50).
  • Article 6 — If the system is also high-risk, Chapter III obligations apply in addition to Article 50.
  • Article 13 — Deployer-facing transparency for high-risk systems (different regime from Article 50).
  • Article 51 — GPAI models have separate Chapter V obligations; downstream systems built on GPAI can trigger Article 50.
  • Article 99 — Penalties for transparency infringements.
  • Article 113 — Application dates (Chapter IV applies from 2 August 2026).

Official wording: Article 50 — Transparency obligations for providers and deployers of certain AI systems (English)

The following reproduces the complete text of Article 50 from the English consolidated text of Regulation (EU) 2024/1689 (OJ L 2024/1689).
1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence.
2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.
3. Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law.
4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
5. The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements.
6. Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be without prejudice to other transparency obligations laid down in Union or national law for deployers of AI systems.
7. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The Commission may adopt implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 56(6). If it deems the code is not adequate, the Commission may adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2).

Recitals (preamble) on EUR-Lex

The recitals in the same consolidated AI Act on EUR-Lex contextualise transparency, deepfake labelling, machine-readable marking, and the balance between disclosure and artistic freedom. Use the official preamble on EUR-Lex; do not rely on unofficial recital lists without checking sequence and wording against the authentic text.

Compliance checklist

  • Inventory every user-facing AI touchpoint (web chat, voice assistant, phone IVR, kiosk, API-served responses).
  • Implement clear, timely AI interaction disclosures—not buried in terms of service only.
  • Add machine-readable metadata or watermarking pipelines for synthetic image/audio/video/text generators (Article 50(2)).
  • For emotion recognition or biometric categorisation: inform exposed persons before or at the time of exposure; confirm GDPR lawful basis.
  • For deepfake-style content: label as AI-generated unless a legitimate exception (artistic, satirical, editorial) applies—confirm with counsel.
  • Train support, marketing, and content teams on the difference between Article 50, Article 13, and Article 5 obligations.
  • Review exceptions (law enforcement, obvious-from-context, artistic works) against the authentic Article 50 paragraphs before relying on them.
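The first checklist step, inventorying AI touchpoints, can be supported by a simple triage table mapping each system's characteristics to the Article 50 paragraphs that may apply. The attribute names below are our own shorthand, not terms from the Act, and the output is a prompt for legal review, not a compliance determination:

```python
# Hypothetical triage sketch. Attribute keys are invented shorthand;
# the result flags paragraphs for legal review, it is not legal advice.
def article50_triage(system: dict) -> list[str]:
    """Map coarse system attributes to potentially applicable Art. 50 duties."""
    duties = []
    if system.get("interacts_with_persons"):
        duties.append("Art. 50(1): disclose AI interaction (provider duty)")
    if system.get("generates_synthetic_content"):
        duties.append("Art. 50(2): machine-readable marking (provider duty)")
    if system.get("emotion_or_biometric"):
        duties.append("Art. 50(3): inform exposed persons (deployer duty)")
    if system.get("produces_deepfakes"):
        duties.append("Art. 50(4): deepfake disclosure (deployer duty)")
    return duties
```

Note that the duties are cumulative: a chatbot that also generates synthetic images would be flagged under both paragraphs 1 and 2, and any high-risk classification under Chapter III applies on top.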


Frequently asked questions

Does a 'Powered by AI' footer suffice for chatbot disclosure?

Only if it is clear at the moment of interaction and not contradicted elsewhere. The test is whether a 'reasonably well-informed, observant and circumspect' person would realise they are interacting with AI. Conservative UX reduces risk.

Is Article 50 only about chatbots?

No. Article 50 covers four scenarios: AI-person interaction (chatbots, voice assistants), machine-readable content marking (synthetic media), emotion recognition and biometric categorisation notices, and deepfake labelling by deployers.

Does Article 50 apply to high-risk systems too?

Yes. If a high-risk system also interacts directly with natural persons, both Chapter III (high-risk) and Article 50 obligations apply. They are cumulative, not alternative.

What about AI-generated text?

Article 50(2) covers synthetic text as well as image/audio/video. Deployers publishing AI-generated text on matters of public interest must disclose the artificial origin unless the content underwent human editorial review with a person holding editorial responsibility.

Are there artistic exceptions for deepfakes?

Article 50(4) limits the transparency obligation for content that is part of an evidently artistic, creative, satirical, fictional, or analogous work or programme—disclosure must not hamper the display or enjoyment of the work. However, the provider's machine-readable marking under Article 50(2) still applies.