
AI Act Article 50 Transparency: disclosure checklist + copy/paste templates (Disclosure Pack v1)

Evergreen Article 50 guide for product teams: when disclosure applies, UX placement patterns, copy/paste strings, marking & watermarking options, logging fields, and evidence artifacts. Includes a downloadable Disclosure Pack PDF.

Legalithm Team · 9 min read


If your product shows AI-generated or AI-edited content to users, Article 50 transparency is not “a policy page thing”. It’s product UX: disclosures, labels, exports, and (where feasible) marking/watermarking that make it obvious what was produced or altered by AI.

Most teams fail Article 50 in predictable ways:

  • the disclosure is buried in terms,
  • some surfaces disclose (web) while others don’t (PDF exports, email, shared reports),
  • nobody knows what to log or what evidence to keep.

This guide is written for product teams who want something evergreen and ship-ready: what Article 50 means in practice, when it applies, and the exact copy/paste templates to implement consistently across surfaces.

EU lawmakers are still negotiating the Digital Omnibus, which could change some AI Act application timelines — but nothing is certain until text is published in EUR‑Lex. As of early May 2026, a 28 April political trilogue did not close a deal (see our stall update); your compliance programme should not pause.

Why we link this from an Article 50 guide: transparency work (labels, exports, logs, evidence) is mostly independent of whether a specific high‑risk date moves — buyers and users still ask “what did you ship, and can you prove it?”

TL;DR — what to do today (60 minutes)

  • Add a visible “AI-generated / AI-assisted” label next to AI outputs.
  • Add a one-sentence disclosure near the first interaction (not hidden).
  • Make exports safe: put a footer disclosure in PDFs/emails/reports.
  • Pick a marking approach for each modality (text/image/audio/video) and document it.
  • Save proof: screenshots + a sample export + a short “where it’s implemented” note.


Get the Disclosure Pack PDF (v1)

If you want the printable version + the full checklist, download it here:

What Article 50 transparency means (plain English)

Article 50 requires transparency about:

  • when people interact with an AI system, and
  • when content is generated or materially modified by AI (especially if it could mislead).

In product terms, it means users shouldn’t have to guess:

  • “Was this produced by AI?”
  • “Did AI modify this?”
  • “Is this real (or synthetic)?”

Your best default is to disclose at the point-of-use, and ensure the disclosure survives exports and sharing.

The transparency rules discussed here come from Regulation (EU) 2024/1689 (Artificial Intelligence Act), in particular Article 50 (Chapter IV — transparency obligations for providers and deployers of certain AI systems). Verify wording against the authentic Union text in EUR‑Lex.

Legalithm materials are operational guidance only and do not constitute legal advice.

Real-world example: A “rewrite this email” feature is compliant in the web UI, but the exported PDF contains no label or footer. Users receiving the PDF can’t tell it’s AI-assisted. Fix: add an export footer disclosure and test exports as part of QA.

Who this is for

This is for product and engineering teams shipping features like:

  • chat assistants and copilots
  • “generate summary / rewrite / translate”
  • AI-created images or videos
  • AI edits to user content (e.g., “improve”, “fix”, “expand”)

When Article 50 applies (decision rules you can use)

Use this practical decision tree:

  1. Do users interact with an AI system? (chat assistant, “generate”, “rewrite”, “summarize”, “translate”)
    → If yes, disclose at point-of-use.
  2. Does AI generate new content or materially modify existing content?
    → If yes, label the output and make exports disclosure-safe.
  3. Could the output be mistaken as human-made or real? (especially images/video/audio)
    → If yes, consider marking/watermarking where feasible.

If you’re unsure, ship the disclosure anyway. It’s low-cost and reduces user confusion.
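The three decision rules above can be sketched as a small helper your team could adapt for feature reviews. This is an illustrative sketch, not a legal test: the names (`AiFeature`, `required_actions`) and the mapping from rules to actions are assumptions drawn from this guide, not from the Act itself.

```python
from dataclasses import dataclass

@dataclass
class AiFeature:
    users_interact: bool          # rule 1: chat, "generate", "rewrite", ...
    generates_or_modifies: bool   # rule 2: new or materially modified content
    could_mislead: bool           # rule 3: could pass as human-made or real

def required_actions(feature: AiFeature) -> list[str]:
    """Map the decision tree above to concrete product actions."""
    actions = []
    if feature.users_interact:
        actions.append("disclose at point-of-use")
    if feature.generates_or_modifies:
        actions.append("label output")
        actions.append("make exports disclosure-safe")
    if feature.could_mislead:
        actions.append("consider marking/watermarking")
    return actions
```

For example, a "rewrite this email" feature that users interact with and that materially modifies content would get all of the first three actions.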

1) Product/UI checklist (Article 50 “done” definition)

  • Disclosure is shown at the point of interaction (not buried in terms).
  • A visible “AI-generated / AI-assisted” label is present near AI outputs on every applicable surface.
  • Exports preserve disclosure/marking (PDF, email, downloadable reports).
  • Marking approach documented per modality (text/image/audio/video). If not feasible, rationale + mitigation documented.
  • Event logging exists for disclosure and exports (include version + surface).
  • Retention defined for logs and evidence artifacts (and who owns it).
  • Evidence artifacts collected: screenshots, sample exports, and a short implementation note.

Real-world example: A chat assistant shows an “AI-generated” badge, but a “Copy to clipboard” button is used heavily. Users paste content into external tools and publish it. Mitigation: add a pre-export warning for “Copy / Share / Publish” flows and keep a PDF/email footer disclosure for formal exports.

2) UX placement patterns (what users actually understand)

Patterns that work well across products:

  • Near output: a small “AI-generated” badge at the top-right of the output block (always visible).
  • Near action: on “Export”, “Send”, “Publish”, show a short warning banner and keep an export footer.
  • In lists/grids: if AI content appears in feeds or cards, include an “AI” label in the card meta.
  • In shared outputs: PDFs/emails/reports need a footer disclosure because UI chrome disappears.

Real-world example: A dashboard feed shows AI summaries in cards. Users scroll quickly and miss a disclosure that only appears inside the detail view. Fix: add a small “AI” label in the card meta (feed-level), not only inside the detail page.

Illustrative UI mockups (placement patterns)

Below are simple mockups — not product screenshots — for design and legal reviews.

Chat / assistant — first interaction

Illustrative mockup: chat window with a clear notice that the user is interacting with an AI system

Pattern: Plain-language line with the first assistant turn, plus optional “How this works” for detail.

Export / share — before the file leaves your system

Illustrative mockup: export dialog including AI-generated content notice

Pattern: When the bundle may include synthetic text, images, audio, or video, show a summary or checkbox and add a standard footer on the exported file.

PDF / document — persistent marking

Illustrative mockup: PDF-style page with footer disclosure and metadata hint

Pattern: Footer (or first-page notice) on every PDF; pair visible text with machine-readable metadata where your stack supports it (Article 50(2) direction).

3) Copy/paste disclosure templates (UI string examples)

Use these as-is. Replace wording only if your legal/compliance owner requires it.

Inline badge (short)

Place next to AI output (not in a settings page).

AI-generated
AI-assisted
AI-edited

Tooltip / helper (1 sentence)

Place near the output or in the UI chrome around the AI feature.

This content was generated with the help of AI.

First-interaction disclosure (simple)

Use this at first use or when the AI feature is enabled.

You are interacting with an AI system. Outputs may be generated or modified by AI.

Pre-export banner (high-signal surfaces)

Use when a user publishes, exports, or sends AI-generated content.

AI-generated content
You are about to export content generated with AI. Please review for accuracy and ensure it meets your policies.
Buttons: Review / Continue

Export footer (persistent)

Put this in PDFs, emails, and downloadable reports.

This document contains AI-generated or AI-assisted content.

Accessibility (aria-label example)

aria-label="AI-generated content disclosure"
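Teams often keep all of these strings in one versioned config so every surface pulls the same copy and logs which version was shown. A minimal sketch, assuming a single-locale product; the keys, version scheme, and `get_copy` helper are illustrative names, not part of any standard:

```python
# Hypothetical single source of truth for disclosure copy.
DISCLOSURE_COPY = {
    "version": "v1",
    "badge": "AI-generated",
    "tooltip": "This content was generated with the help of AI.",
    "first_interaction": (
        "You are interacting with an AI system. "
        "Outputs may be generated or modified by AI."
    ),
    "export_footer": "This document contains AI-generated or AI-assisted content.",
}

def get_copy(key: str) -> tuple[str, str]:
    """Return (text, version) so logs can record which copy was shown."""
    return DISCLOSURE_COPY[key], DISCLOSURE_COPY["version"]
```

If you localize, a structure like this is typically duplicated per locale with the same keys and an independent version per locale.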

4) Marking / watermarking (what to do)

For each surface and modality, pick a marking method and keep it consistent. If marking is not feasible, document the rationale and mitigation (e.g., export footer + disclosure).

Text

  1. Show a visible label near output and repeat disclosure in exports.
  2. If content is copyable, treat “export” as the boundary: add footer in PDFs/emails and log export events.
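The export-boundary step above can be sketched as a tiny helper that stamps the footer on outgoing text. A hedged sketch only: `finalize_export` and the footer constant are illustrative names, and a real PDF or email exporter would render the footer in its template rather than appending it to the body.

```python
EXPORT_FOOTER = "This document contains AI-generated or AI-assisted content."

def finalize_export(body: str) -> str:
    """Append the disclosure footer at the export boundary."""
    return body.rstrip() + "\n\n---\n" + EXPORT_FOOTER + "\n"
```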

Images

  1. Apply a visible watermark (corner) when feasible.
  2. Add machine-readable metadata when supported (and log marking method).

Audio / Video

  1. Use visible overlay + metadata tags when feasible.
  2. For video exports, consider an intro/outro slate indicating AI-generated content.

“Not feasible” is acceptable — but document it

Sometimes you can’t guarantee marking survives (e.g., copy/paste into other tools). If so:

  • keep the UI disclosure + export footer, and
  • document “not feasible” with a short mitigation statement.

Real-world example: An image generator exports PNGs that could be reposted. If your product can do it, apply a visible corner watermark and add metadata tags. If not feasible, keep disclosure at point-of-use and ensure the download UI includes a clear “AI-generated image” label.

5) What to log (minimum fields + a simple table)

At minimum, capture: surface, disclosure version, content type, marking method, export format, timestamp.

| Field | Why it matters |
| --- | --- |
| surface | Proves you covered web/app/export surfaces |
| disclosure_version | Proves which copy was shown at the time |
| content_type | Text / image / audio / video |
| marking_method | Visible label / watermark / metadata / not feasible |
| export_format | PDF / email / report / etc. |
| timestamp | Audit timeline |
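A minimal event record covering these fields might look like the sketch below. The `DisclosureEvent` shape and field values are assumptions for illustration; adapt them to your logging pipeline.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DisclosureEvent:
    surface: str             # e.g. "web", "ios", "pdf-export"
    disclosure_version: str  # which copy was shown at the time
    content_type: str        # "text" | "image" | "audio" | "video"
    marking_method: str      # "label" | "watermark" | "metadata" | "not-feasible"
    export_format: str       # "pdf" | "email" | "report" | "n/a"
    timestamp: str           # ISO 8601, UTC

def log_event(event: DisclosureEvent) -> str:
    """Serialize one record for an append-only audit log."""
    return json.dumps(asdict(event), sort_keys=True)

event = DisclosureEvent(
    surface="pdf-export",
    disclosure_version="v1",
    content_type="text",
    marking_method="label",
    export_format="pdf",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

One such record per disclosure shown, per marking applied, and per export performed is enough to answer an audit question with concrete samples.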

6) Evidence artifacts to keep (to prove you complied)

If someone asks “Show me you did Article 50,” you want to be able to answer in 5 minutes.

Screenshots

  • Disclosure shown on each surface (web/app).
  • Marking/watermark visible on outputs.
  • Pre-export warning banner/modal (if used).

Export samples

  • 1–2 PDFs/docs showing disclosure + markings preserved.
  • Sample email export with disclosure footer.

Logs (samples)

  • One example record for: disclosure shown, content marked (if applicable), export performed.

Policies / runbooks

  • When to mark vs when “not feasible” applies.
  • Where disclosure copy is configured and how it is versioned.
  • Retention policy (what, how long, owner).

Common pitfalls (and how to avoid them)

  1. Only disclosing in a policy page
    Fix: Put the disclosure next to the output and at first interaction.
  2. Forgetting exports
    Fix: Add a footer disclosure to PDFs/emails/reports and test exports in QA.
  3. Inconsistent labels across surfaces
    Fix: Keep one canonical copy and version it.
  4. No proof
    Fix: Save 3 screenshots + 1 export sample + 1 log sample.

Frequently Asked Questions (FAQ)

Do we need to do this if we only “assist” users (not fully generate)?

If AI materially changes the content, users can still be misled. Ship a label like “AI-assisted” and disclose near output.

Do we need watermarking for everything?

No. Use watermarking where feasible (especially media). For text, a visible label + export footer is often the practical baseline.

What’s the minimum we can ship safely?

  • A visible label near AI outputs
  • A one-sentence disclosure at point-of-use
  • Export footer disclosure
  • A few screenshots + one export sample

Should we localize disclosures?

Yes—if you ship in multiple languages. Keep semantics consistent and version the copy per locale.

Disclaimer

This post is provided for informational purposes and does not constitute legal advice.

AI Act
Article 50
Transparency
Compliance
Product

Check your AI system's compliance

Free assessment — no signup required. Get your risk classification in minutes.

Run free assessment