Privacy Policy Checklist for AI Companies: What GDPR and the AI Act Require in 2026
A privacy policy is not a formality. For companies that build or deploy AI systems, it is a binding transparency document that regulators actively scrutinise. In recent enforcement actions, data protection authorities have issued fines exceeding EUR 100 million for insufficient privacy notices — and AI-specific transparency failures are an emerging enforcement priority.
The challenge in 2026 is that AI companies now face overlapping transparency requirements from two major regulations: the GDPR (Articles 13–14 and the automated decision-making provisions in Article 22) and the EU AI Act (Article 50, whose transparency obligations have applied since 2 August 2026, regardless of a system's risk tier). Meeting one regulation's requirements does not automatically satisfy the other.
This guide provides a section-by-section checklist for building a privacy policy that covers both frameworks, with specific guidance on disclosing AI-powered features, automated decision-making, training data practices, and chatbot interactions.
TL;DR — What your AI company's privacy policy must include
- All GDPR Article 13/14 mandatory elements: identity of the controller, purposes and lawful bases, recipients, retention periods, data subject rights, and contact details.
- Clear disclosure of which features or processes use AI, including the logic involved, significance, and envisaged consequences (GDPR Article 13(2)(f) / Article 14(2)(g)).
- AI Act Article 50 transparency disclosures: notification when individuals interact with an AI system, when content is AI-generated, or when emotion recognition or biometric categorisation is in use.
- A description of AI training data practices — what data is collected, whether user data feeds model training, and how to opt out.
- Information about the right to human review of automated decisions with legal or significant effects.
- Separate disclosures for AI chatbots, clearly stating the user is interacting with an AI system.
- A documented update process — privacy policies must be kept current as AI features evolve.
What the GDPR requires in a privacy notice (Articles 13–14)
The GDPR mandates specific information that must be provided to data subjects. Article 13 applies when data is collected directly from the individual; Article 14 applies when data is obtained from other sources (e.g., scraped data, third-party APIs, purchased datasets).
Mandatory elements under Article 13
When you collect personal data directly from a user, your privacy policy must include:
- Identity and contact details of the controller (and representative, if applicable).
- Contact details of the DPO (if one is appointed).
- Purposes and lawful basis for each processing activity.
- Legitimate interests pursued (where Art. 6(1)(f) is the basis).
- Recipients or categories of recipients of the data.
- International transfer details — whether data is transferred outside the EEA and the safeguards in place.
- Retention period or criteria for determining it.
- Data subject rights — access, rectification, erasure, restriction, portability, objection.
- Right to withdraw consent (where consent is the lawful basis).
- Right to lodge a complaint with a supervisory authority.
- Whether data provision is a statutory or contractual requirement, and consequences of not providing it.
- Automated decision-making and profiling — the existence of automated decisions, meaningful information about the logic involved, and the significance and envisaged consequences.
Additional elements under Article 14
When data is not collected from the data subject (common for AI training data), you must also disclose:
- The categories of personal data processed.
- The source of the personal data (including whether it came from publicly available sources).
The critical AI clause: Article 13(2)(f) / 14(2)(g)
This is the provision most AI companies underdisclose. Where automated decision-making (including profiling) exists, the controller must provide:
- The existence of automated decision-making.
- Meaningful information about the logic involved.
- The significance and envisaged consequences for the data subject.
"Meaningful information about the logic involved" does not require disclosing source code or model weights. It means explaining, in terms a non-expert can understand, what factors the system considers, how the decision process works at a general level, and what impact it may have on the individual.
AI Act transparency requirements (Article 50)
The EU AI Act's Article 50 adds transparency obligations that go beyond the GDPR:
- Inform individuals when they are interacting with an AI system (unless this is obvious from the context).
- Label AI-generated or manipulated content, including deepfakes.
- Disclose when emotion recognition or biometric categorisation systems are in use.
These obligations apply regardless of risk tier. Even a minimal-risk chatbot must comply with the AI interaction disclosure rule.
Real-world example: A legal-tech company provides an AI assistant that drafts contract clauses. Under Article 50, the company must inform users they are interacting with an AI system. Under GDPR Article 13(2)(f), if the AI processes the user's data to personalise outputs or makes decisions about them, this must also be disclosed in the privacy policy.
Section-by-section privacy policy template for AI companies
Below is a 16-section template structure. Each section lists what to include and flags common pitfalls.
Section 1: Identity of the controller
State the company's full legal name, registered address, company registration number, and contact email. If you have a representative in the EU (required for non-EU companies processing EU data), include their details.
Section 2: Data Protection Officer
If you have appointed a DPO, provide their name (or title) and contact details. Even where a DPO is not mandatory, providing a data protection contact point demonstrates good practice.
Section 3: Categories of personal data collected
List all categories of personal data you process, including:
- Account data (name, email, phone number)
- Usage data (IP address, device identifiers, browser type)
- Content data (user inputs, uploaded files, messages)
- AI-specific data: inputs to AI features, AI-generated outputs linked to individuals, inferred or derived data (scores, classifications, predictions)
- Financial data (payment details, transaction history)
- Special category data (if any — health, biometric, political opinions, etc.)
Common mistake: Failing to list AI-derived data (e.g., risk scores, sentiment classifications, recommended categories) as a category of personal data. If this data is linked or linkable to an individual, it is personal data.
Section 4: Sources of personal data
Describe where you obtain data from:
- Directly from users (forms, account creation, user input)
- Automatically collected (cookies, analytics, server logs)
- Third-party sources (APIs, data brokers, public databases)
- AI training data sources (if your model is trained on user data, data from partners, or publicly sourced datasets)
This is where Article 14 applies for data not collected from the data subject.
Section 5: Purposes and lawful bases
For each processing purpose, state the specific lawful basis. For example:
- Providing the service: contract performance (Art. 6(1)(b)).
- Fraud detection and security: legitimate interests (Art. 6(1)(f)).
- Marketing communications: consent (Art. 6(1)(a)).
- Using user data for AI model training: consent or legitimate interests, depending on your assessment.
Common mistake: Using a single blanket lawful basis for all processing. Each distinct purpose requires its own lawful basis assessment.
Section 6: AI-powered features and automated decision-making
This is the most important AI-specific section and where most companies fall short. Include:
a) List of AI-powered features. Name every feature that uses AI or ML, described in plain language:
- "Our recommendation engine uses machine learning to suggest relevant content based on your browsing history."
- "Our fraud detection system uses automated analysis of transaction patterns to flag potentially fraudulent activity."
- "Our customer support chatbot uses a large language model to generate responses to your queries."
b) Logic involved. For each AI feature, explain at a general level how it works:
- What input data the system considers.
- What the system produces (output type).
- The general methodology (e.g., "pattern matching against historical data" or "natural language processing to interpret your query").
c) Significance and consequences. Explain what the AI decision means for the individual:
- "If our fraud detection system flags your transaction, it may be temporarily held for manual review, which could delay processing by up to 24 hours."
- "The recommendation engine influences which content appears first in your feed, but does not restrict access to any content."
d) Whether fully automated decisions are made. If any decisions are solely automated with legal or significant effects, state this explicitly and describe the safeguards in place (right to human review, right to contest, right to express a view).
Real-world example (good disclosure):
"We use a machine-learning model to assess loan eligibility based on your financial history, income data, and credit bureau information. The model produces a risk score that determines whether your application is approved, declined, or referred for manual review. If you receive an automated decline, you have the right to request human review of the decision, express your point of view, and contest the outcome. You can exercise these rights by contacting our data protection team at privacy@example.com."
Real-world example (bad disclosure):
"We may use AI and automated technologies to improve our services."
This discloses nothing meaningful — no specific features, no logic, no consequences, no rights.
Section 7: AI training data practices
If your company uses personal data to train or fine-tune AI models, disclose:
- What data is used for training (e.g., user interactions, uploaded content, usage patterns).
- Whether user data is used to train models — be explicit. "Your inputs to [feature name] may be used to improve our AI models" or "Your data is not used for model training."
- How to opt out of having data used for training purposes.
- Anonymisation and aggregation practices — explain whether data is anonymised or aggregated before training.
- Retention for training — how long training data is retained and when it is deleted.
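An opt-out is only meaningful if it is actually enforced when training datasets are assembled. The sketch below is an illustrative way to honour the flag at dataset-assembly time rather than only at collection time; the `UserRecord` fields and function name are assumptions, not any particular product's schema.

```python
from dataclasses import dataclass


@dataclass
class UserRecord:
    user_id: str
    content: str
    training_opt_out: bool  # set by the user in their privacy settings


def select_training_data(records: list[UserRecord]) -> list[UserRecord]:
    """Exclude any record whose owner has opted out of model training.

    Checking the flag here, at dataset-assembly time, means a user who
    opts out after the data was collected is still excluded from the
    next training run.
    """
    return [r for r in records if not r.training_opt_out]


records = [
    UserRecord("u1", "hello", training_opt_out=False),
    UserRecord("u2", "secret", training_opt_out=True),
]
training_set = select_training_data(records)
```

Whatever the implementation, the privacy policy's opt-out promise should describe the behaviour this check produces: opted-out data does not enter future training runs.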
Section 8: Recipients and data sharing
List categories of recipients:
- Sub-processors and service providers (hosting, analytics, payment)
- AI model providers (if you use third-party AI APIs — OpenAI, Anthropic, Google, etc.)
- Group companies
- Legal or regulatory authorities
- Business partners or advertisers (if applicable)
For third-party AI API providers, disclose that user data may be transmitted to the AI provider for inference, and whether the provider retains or uses that data for model training.
Section 9: International data transfers
Describe:
- Countries or regions where data is processed.
- The transfer mechanism used (adequacy decision, SCCs, BCRs).
- How to obtain a copy of the safeguards (typically "contact us at…").
- Specific mention of AI infrastructure providers and where they process data.
Section 10: Retention periods
Provide specific retention periods per data category. Avoid "as long as necessary" without further specificity. Include:
- Account data: for the duration of the account plus [X] years.
- AI training data: [specific period] or "until the model is retrained, after which raw data is deleted."
- AI-generated outputs: [specific period].
- Server logs: [specific period].
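The retention periods stated in the policy need a matching deletion mechanism. Here is a minimal sketch of a per-category retention schedule and an expiry check; the categories mirror the list above, but every period shown is a placeholder to be replaced by your own retention analysis.

```python
from datetime import date, timedelta

# Illustrative retention schedule, keyed by the data categories the
# policy discloses. All periods are example values, not recommendations.
RETENTION_DAYS = {
    "account_data": 365 * 6,      # duration of account plus X years (example: 6)
    "ai_training_data": 365 * 2,  # or: until the model is retrained
    "ai_outputs": 365,
    "server_logs": 90,
}


def is_due_for_deletion(category: str, collected_on: date, today: date) -> bool:
    """Return True once a record has exceeded its category's retention period."""
    return today > collected_on + timedelta(days=RETENTION_DAYS[category])
```

A scheduled job can then sweep each store using this check, which keeps the published retention table and the actual deletion behaviour in sync.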
Section 11: Data subject rights
List all GDPR rights with practical guidance on how to exercise them:
- Right of access (Art. 15)
- Right to rectification (Art. 16)
- Right to erasure (Art. 17)
- Right to restriction of processing (Art. 18)
- Right to data portability (Art. 20)
- Right to object (Art. 21) — including the right to object to profiling
- Rights related to automated decision-making (Art. 22) — including the right to human intervention
- Right to withdraw consent (Art. 7(3))
- Right to lodge a complaint with a supervisory authority
Provide a clear mechanism for exercising rights (email address, web form, or in-product feature) and state the response timeframe (one month, extendable by two months for complex requests).
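The one-month deadline (extendable by two further months under GDPR Article 12(3)) is calendar-month arithmetic, which is easy to get wrong at month ends. A small sketch of computing the response deadline, with day-of-month clamping; the helper names are illustrative:

```python
import calendar
from datetime import date


def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


def response_deadline(received: date, extended: bool = False) -> date:
    """GDPR Art. 12(3): respond within one month of receipt, extendable
    by two further months for complex or numerous requests. Any
    extension must itself be notified within the first month."""
    return add_months(received, 3 if extended else 1)
```

For example, a request received on 31 January 2026 is due by 28 February 2026, since February has no day 31.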
Section 12: Right to human review of AI decisions
If your AI system makes automated decisions with legal or significant effects, this deserves its own dedicated sub-section:
- Explain that the individual has the right to request human review of any automated decision.
- Describe how to request it (contact channel, expected timeframe).
- Explain what the human review process involves (a qualified reviewer examines the AI output and the underlying data to make an independent decision).
Section 13: AI chatbot and conversational AI disclosure
If you operate AI chatbots or virtual assistants, include a specific disclosure:
- That the chatbot is an AI system, not a human agent.
- What data the chatbot collects during conversations (inputs, conversation logs).
- Whether conversations are stored and for how long.
- Whether conversation data is used for model training.
- How to escalate to a human agent.
Under AI Act Article 50, this disclosure must also appear at the point of interaction (e.g., at the start of the chat session), not only in the privacy policy.
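One simple way to guarantee the point-of-interaction disclosure is to inject it as the first message of every chat session, before the assistant says anything else. The sketch below assumes a session modelled as a list of message dicts; the wording, 30-day retention figure, and escalation keyword are illustrative placeholders.

```python
# Illustrative disclosure text; the retention period and escalation
# keyword are placeholders, not recommendations.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human agent. "
    "Conversations are stored for 30 days and may be reviewed to "
    "improve the service. Type 'agent' at any time to reach a human."
)


def start_chat_session(messages: list[dict]) -> list[dict]:
    """Prepend the Article 50 disclosure so it is the first thing the
    user sees in the session, independent of the privacy policy."""
    return [{"role": "system_notice", "text": AI_DISCLOSURE}] + messages


session = start_chat_session([])
```

Because the disclosure is prepended unconditionally, no code path can start a conversation without it, which is the property Article 50 effectively demands.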
Section 14: Cookies and tracking technologies
Standard cookie disclosure including:
- Categories of cookies (strictly necessary, functional, analytics, marketing).
- Third-party cookies and their purposes.
- Link to cookie preference management.
Note: AI-powered personalisation that relies on cookies or tracking requires disclosure here and in the AI features section.
Section 15: Children's data
If your AI system could be used by children, describe:
- Age verification mechanisms.
- Parental consent requirements (the GDPR default age of consent is 16, but Member States may lower it to as young as 13).
- Limitations on AI processing of children's data.
Section 16: Changes to this policy
- Describe how you notify users of material changes.
- Commit to a specific notification mechanism (email, in-app notification, website banner).
- Maintain a version history with dated changelog.
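A dated, structured changelog is what regulators may ask for during an investigation. A minimal sketch of how it might be kept; versions, dates, and summaries are illustrative:

```python
from datetime import date

# Each entry records when the policy changed and whether the change was
# material (material changes trigger user notification before they
# take effect). All entries here are example data.
policy_versions = [
    {"version": "1.0", "date": date(2025, 9, 1),
     "summary": "Initial policy.", "material_change": False},
    {"version": "1.1", "date": date(2026, 2, 15),
     "summary": "Added Section 7: AI training data practices.",
     "material_change": True},
]


def latest_version(versions: list[dict]) -> dict:
    return max(versions, key=lambda v: v["date"])


def material_changes(versions: list[dict]) -> list[dict]:
    """Changes that required user notification."""
    return [v for v in versions if v["material_change"]]
```

Keeping the flag in the record itself makes it easy to audit, after the fact, which changes triggered a notification and when.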
Good vs. bad AI disclosures: comparison

| Aspect | Bad disclosure | Good disclosure |
|---|---|---|
| Specificity | "We may use AI and automated technologies to improve our services." | Names each AI feature and describes it in plain language. |
| Logic | No explanation of inputs or methodology. | States the input data, output type, and general methodology. |
| Consequences | Silent on impact. | Explains the effect (e.g., a flagged transaction is held for manual review). |
| Rights | No mention of Article 22 safeguards. | States the right to human review, to contest, and to express a view. |
Common gaps in AI company privacy policies
- No mention of AI at all. The privacy policy reads as if no AI features exist, despite the product relying heavily on machine learning.
- AI training data opt-out missing. Users cannot find how to prevent their data from being used for model training — or the option does not exist.
- Vague logic descriptions. "We use AI to improve our services" conveys no meaningful information about what the AI does, what data it uses, or how it affects the individual.
- No Article 22 safeguards. Automated decisions with legal or significant effects are made, but the privacy policy does not mention the right to human intervention, the right to contest, or the right to express a view.
- Third-party AI providers undisclosed. User data is sent to OpenAI, Google, or other providers for inference, but this is not disclosed in the recipients section.
- AI-generated content not labelled. The company generates AI content (images, text, audio) without disclosing this to users, violating AI Act Article 50.
- No chatbot disclosure. The AI chatbot begins responding without informing the user they are communicating with an AI system.
- Missing retention periods for AI data. General retention periods are stated, but AI-specific data (training datasets, inference logs, conversation histories) is not addressed.
- Cookie policy disconnected from AI personalisation. AI-powered personalisation uses tracking data, but the cookie policy does not cross-reference the AI features section.
- No update mechanism. AI features evolve rapidly, but the privacy policy has no process for timely updates when new AI capabilities are launched.
Updating your privacy policy for the AI Act
With AI Act transparency obligations now enforceable for high-risk systems, and the Article 50 limited-risk transparency requirements applying broadly, AI companies should:
Step 1: Audit your AI features
Catalogue every feature that uses AI, ML, or automated decision-making. For each, document: what data it processes, what it outputs, and what effect it has on users.
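The audit is easiest to keep current as a structured catalogue rather than a prose document, because each entry then maps directly onto the Section 6 disclosures. A minimal sketch; the field names and the example entry are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass


@dataclass
class AIFeatureRecord:
    """One row of the AI feature audit: what the feature processes,
    what it outputs, and what effect it has on users."""
    name: str
    input_data: list[str]
    output: str
    effect_on_users: str
    fully_automated: bool = False  # True triggers GDPR Art. 22 safeguards


audit = [
    AIFeatureRecord(
        name="Fraud detection",
        input_data=["transaction history", "device fingerprint"],
        output="fraud risk score",
        effect_on_users="flagged transactions held for manual review",
        fully_automated=False,
    ),
]

# Features needing Art. 22 safeguards (human review, right to contest)
needs_art22_safeguards = [f.name for f in audit if f.fully_automated]
```

The same records can later feed Step 2's mapping: each field answers one of the regulatory questions listed below.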
Step 2: Map to regulatory requirements
For each AI feature, determine:
- Does GDPR Article 13(2)(f)/14(2)(g) apply? (automated decision-making disclosure)
- Does AI Act Article 50 apply? (AI interaction, content generation, emotion recognition, biometric categorisation)
- Is the system high-risk under the AI Act? (if so, additional transparency obligations for deployers under Article 26)
Step 3: Draft the disclosures
Write specific, plain-language disclosures for each AI feature using the section template above.
Step 4: Layer the information
Users should not have to read a 10,000-word privacy policy to find AI disclosures. Use a layered approach:
- First layer: Short AI disclosure at the point of interaction (chatbot greeting, feature tooltip, inference output screen).
- Second layer: Summary table of AI features in the privacy policy.
- Third layer: Full detailed disclosure in dedicated privacy policy sections.
Step 5: Establish a review cadence
Set a quarterly review cycle for your privacy policy. Any time a new AI feature is launched, assess whether the privacy policy needs updating before the feature goes live.
Real-world example: A recruitment platform adds an AI feature that ranks candidates by predicted job fit. Before launch, the privacy team updates the privacy policy to disclose: the feature's existence, the data it considers (CV, skills, experience), the general logic (matching candidate profiles against role requirements), the significance (ranking affects which candidates the employer sees first but does not automatically reject anyone), and the right to object. The chatbot that serves candidates is also updated with an AI interaction disclosure per Article 50.
Common mistakes when drafting AI privacy policies
- Copying a template without customising for AI. Generic privacy policy templates do not include AI-specific sections. They will leave you non-compliant from day one.
- Writing for lawyers instead of users. The GDPR requires that information be provided in a "concise, transparent, intelligible and easily accessible form, using clear and plain language" (Article 12). Legal jargon defeats the purpose.
- Updating the policy without notifying users. Material changes require notification. Adding AI training on user data, for example, is a material change that users must be informed about.
- Failing to distinguish between provider and deployer disclosures. If you deploy a third-party AI system, you need to disclose this separately from your own AI features, including the identity of the AI provider. See provider vs. deployer obligations.
- No version control. Maintain a changelog with dates. Regulators may ask for previous versions during an investigation.
- Burying AI disclosures in general sections. AI-specific information should be in a clearly labelled, easily findable section — not hidden in a general "how we use your data" paragraph.
Frequently asked questions
Do I need a separate AI disclosure or can I include it in my existing privacy policy?
You can include AI disclosures in your existing privacy policy, but they must be clearly identifiable and not buried in general text. Many companies create a dedicated "AI and Automated Decision-Making" section within their privacy policy, plus short in-product disclosures at points of AI interaction (as required by AI Act Article 50).
What counts as "meaningful information about the logic involved" under the GDPR?
You do not need to disclose source code, model weights, or trade secrets. You must explain: what categories of data the system uses as inputs, the general methodology (e.g., "pattern-matching," "classification model," "natural language processing"), what output the system produces, and what effect the output has on the individual. The standard is whether a reasonable non-expert would understand how the system works at a functional level.
Does the AI Act require me to explain my model's decision-making?
Article 50 focuses on disclosure of AI interaction, content generation, and biometric processing, not model explainability per se. However, for high-risk systems, AI Act Article 13 requires instructions for use that enable deployers to interpret system outputs. Combined with GDPR Article 22, this creates a practical obligation to make AI decisions explainable.
How often should I update my privacy policy?
Review quarterly and update whenever you launch a new AI feature, change how existing AI features process data, add new third-party AI providers, or change your training data practices. Material changes require user notification before they take effect.
What if I use a third-party AI API (e.g., OpenAI, Anthropic)?
Disclose: that user data is sent to a third-party AI provider for processing, the identity of the provider, the purpose (inference, model improvement), whether the provider retains or uses the data for its own training, the transfer mechanism if data leaves the EEA, and the data processing agreement in place. Check the provider's terms — some providers retain input/output data by default unless you opt out via their API settings.
Can my privacy policy also serve as my AI Act transparency notice?
Partially. Your privacy policy can include the disclosures required by the AI Act, but Article 50's in-context notification requirements (e.g., disclosing AI interaction at the start of a chatbot session) must happen at the point of interaction — the privacy policy alone does not satisfy this. Use a layered approach: immediate in-context notice plus full detail in the privacy policy.
To assess your full regulatory position under both the GDPR and the AI Act, run the free AI Act assessment and review the complete AI Act guide for the broader compliance picture. For a deeper look at how these two regulations interact, see EU AI Act vs GDPR: differences and overlap, or start with the EU AI Act compliance checklist.
Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.


