EU AI Act vs GDPR: Key Differences, Overlaps, and How to Run Both Compliance Programs
TL;DR — AI Act vs GDPR at a glance
- Core difference: GDPR regulates personal data processing. The AI Act regulates AI systems themselves — their design, development, deployment, and use. The AI Act applies even when no personal data is involved.
- Fines accumulate independently: A single incident can trigger penalties under both regulations. Maximum GDPR fine: EUR 20M / 4% of turnover. Maximum AI Act fine: EUR 35M / 7% of turnover. These are additive, not alternative.
- Five major overlap areas: biometrics, automated decision-making, impact assessments (DPIA + FRIA), transparency obligations, and logging/record-keeping.
- Key divergence: The AI Act creates obligations for AI providers (developers) that have no GDPR equivalent. GDPR does not regulate software vendors as such.
- Practical approach: Run both as an integrated compliance programme — unified inventory, combined assessments, consolidated notices, coordinated incident reporting, shared governance.
- Both apply to most business AI: If your AI system processes personal data of EU individuals (which covers most commercial AI), you are subject to both regulations simultaneously.
Core difference: what each regulation governs
GDPR (Regulation (EU) 2016/679) regulates the processing of personal data. Its concern is data protection — lawfulness, purpose limitation, data minimisation, accuracy, storage limitation, integrity, and confidentiality. GDPR applies whenever personal data of individuals in the EU is processed, regardless of the technology used.
EU AI Act (Regulation (EU) 2024/1689) regulates AI systems themselves — their design, development, placement on the market, deployment, and use. Its concern is safety, fundamental rights, and trust. The AI Act applies even when no personal data is processed. An AI system that optimises energy grid distribution using only anonymised infrastructure data is within the AI Act's scope but outside GDPR's scope.
The simplest distinction: GDPR asks "what are you doing with the data?" The AI Act asks "what is your AI system doing to people?"
Both questions can — and frequently do — apply to the same system simultaneously.
Five major areas of overlap
1. Biometrics
GDPR: Classifies biometric data as a special category of personal data (Article 9), requiring explicit consent or a specific legal basis for processing. Biometric data used for identification purposes triggers the highest level of GDPR protection.
AI Act: Goes further than GDPR in three ways. First, it prohibits certain biometric practices entirely under Article 5 — real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), emotion recognition in workplaces and education, untargeted facial image scraping from the internet or CCTV, and biometric categorisation inferring sensitive attributes (race, political opinions, sexual orientation). Second, it classifies most other biometric AI as high-risk under Annex III. Third, it imposes transparency obligations on all biometric categorisation and emotion recognition systems under Article 50.
Practical impact: If you use biometric AI, you must comply with GDPR's strict legal basis requirements (explicit consent or Article 9(2) exception) and the AI Act's prohibitions, high-risk obligations, or transparency requirements simultaneously. A biometric access control system in an office building, for instance, requires a GDPR legal basis for processing biometric data, AI Act high-risk compliance (risk management, technical documentation, conformity assessment), and transparency notification to employees.
2. Automated decision-making and profiling
GDPR: Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Organisations must provide meaningful information about the logic involved, the significance, and the envisaged consequences. Individuals have the right to obtain human intervention, express their point of view, and contest the decision.
AI Act: Does not replicate GDPR Article 22, but regulates the AI systems that make such decisions — particularly when used in employment (Annex III point 4), credit and insurance (point 5), education (point 3), law enforcement (point 6), migration (point 7), and public services (point 8). The AI Act requires risk management, human oversight, technical documentation, accuracy metrics, and conformity assessment for these high-risk systems.
Practical impact: An AI system that scores credit applicants must comply with GDPR Article 22 (right to human intervention, meaningful information, right to contest) and the AI Act's full suite of high-risk obligations. The two regulations reinforce each other but are enforced by different authorities — the DPA for GDPR, the market surveillance authority for the AI Act.
3. Impact assessments: DPIA + FRIA
GDPR: Requires a Data Protection Impact Assessment (DPIA) under Article 35 when processing is likely to result in high risk to data protection rights. The DPIA must describe the processing, assess necessity and proportionality, identify risks to data subjects, and define measures to address those risks.
AI Act: Requires a Fundamental Rights Impact Assessment (FRIA) under Article 27 for certain deployers of high-risk AI systems — bodies governed by public law, private operators providing public services, and deployers of credit scoring and insurance risk-assessment systems under Annex III points 5(b) and 5(c). The FRIA covers the full breadth of the EU Charter — dignity, non-discrimination, children's rights, disability rights, access to justice, environmental protection — not just data protection.
Practical impact: Article 27(4) explicitly allows deployers to combine the FRIA and DPIA into a single assessment document. This is strongly recommended. Structure it as a unified document with two clearly labelled sections — one addressing GDPR data protection risks, the other addressing the broader Charter rights. See the detailed FRIA guide for the step-by-step process.
4. Transparency
GDPR: Requires privacy notices (Articles 13–14) explaining what data is collected, the purposes, legal basis, retention periods, data subject rights, and any automated decision-making. The information must be concise, transparent, intelligible, and easily accessible.
AI Act: Requires multiple layers of transparency:
- Informing users when they interact with an AI system (chatbots, deepfakes, emotion recognition) under Article 50
- Informing affected individuals that they are subject to a high-risk AI system under Article 26
- Providing comprehensive instructions for use to deployers under Article 13
- Labelling AI-generated content (synthetic media, deepfakes) in machine-readable format
Practical impact: Your transparency notices to individuals should cover both data processing elements (GDPR) and AI system elements (AI Act). For a chatbot that processes personal data, this means: GDPR notice about data collection and use, plus AI Act disclosure that the user is interacting with an AI system. Consolidate where possible to avoid notice fatigue.
5. Logging and record-keeping
GDPR: Requires records of processing activities (Article 30), documenting each processing operation, its purpose, categories of data, recipients, retention periods, and security measures. Where operational logs contain personal data, data subjects can also exercise their access rights (Article 15) over that data.
AI Act: Requires high-risk AI systems to automatically log events over their lifetime (Article 12) to enable traceability. Deployers must retain the logs under their control for at least six months (Article 26(6)), or longer where other Union or national law requires. Logs must be available to market surveillance authorities upon request.
Practical impact: AI system logs frequently contain personal data (inputs, outputs, user identifiers). This means the logs themselves become subject to GDPR — storage limitation, access rights, erasure obligations, and security requirements. Design your logging infrastructure to satisfy both the AI Act's traceability requirements and GDPR's data protection requirements from the outset.
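The dual requirement above — AI Act traceability plus GDPR data protection applied to the logs themselves — can be sketched as a log record that pseudonymises user identifiers and carries its own retention metadata. Everything here (the field names, the keyed-hash scheme, the retention constant) is an illustrative assumption, not a prescription from either regulation:

```python
import hashlib
import hmac
import json
from datetime import datetime, timedelta, timezone

# Illustrative key — in practice, store and rotate via a secrets manager.
PSEUDONYM_KEY = b"example-key-do-not-hardcode"
AI_ACT_MIN_RETENTION = timedelta(days=183)  # "at least six months"

def pseudonymise(user_id: str) -> str:
    """Keyed hash: logs stay traceable without storing raw identifiers."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def make_log_record(user_id: str, model_version: str,
                    input_summary: str, output_summary: str) -> dict:
    now = datetime.now(timezone.utc)
    return {
        "timestamp": now.isoformat(),           # AI Act traceability
        "model_version": model_version,         # which system produced the output
        "subject_ref": pseudonymise(user_id),   # GDPR: pseudonymised, not raw
        "input_summary": input_summary,
        "output_summary": output_summary,
        # GDPR storage limitation: record the earliest permissible deletion date
        "delete_not_before": (now + AI_ACT_MIN_RETENTION).isoformat(),
    }

record = make_log_record("employee-4711", "scoring-v2.3",
                         "loan application", "score=640")
print(json.dumps(record, indent=2))
```

The design choice is to keep the linkage key outside the log store: erasing the key (or its per-subject variant) effectively anonymises historical logs without destroying the traceability the AI Act requires.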
Where they diverge
AI Act scope without personal data
The AI Act applies to AI systems that affect people even when no personal data is processed. Examples:
- An AI system that optimises energy grid load distribution using only infrastructure sensor data — high-risk under Annex III point 2 (critical infrastructure), but no personal data involved.
- An AI system that prioritises building safety inspections based on structural data — may be high-risk, but processes no personal data.
- An AI system that routes emergency vehicles based on traffic and geographic data — potentially high-risk, no personal data.
GDPR would not apply to any of these. The AI Act would.
Provider obligations with no GDPR equivalent
GDPR regulates data controllers and processors — it does not create direct obligations for software vendors as such. A company that develops a credit scoring algorithm but never processes personal data itself (because the bank deploying it is the controller) has no direct GDPR obligations.
The AI Act changes this entirely. Providers (developers who place AI systems on the market or put them into service under their own name) carry the heaviest compliance burden: risk management, data governance, technical documentation, conformity assessment, CE marking, registration in the EU database, and post-market monitoring. These obligations exist regardless of whether the provider processes personal data.
See the provider vs deployer obligations guide for the full comparison.
Fine structures and accumulation
GDPR and AI Act fines accumulate independently. A single incident can trigger penalties under both regulations, imposed by different authorities (DPA for GDPR, market surveillance authority for AI Act). There is no "double jeopardy" protection.
Calculation example: A bank deploys a biometric AI system for customer identification without conducting a DPIA (GDPR violation) or a FRIA (AI Act violation), and the system processes biometric data without a valid legal basis (GDPR violation) while failing to meet high-risk documentation requirements (AI Act violation).
Potential exposure:
- GDPR: Up to EUR 20M or 4% of global annual turnover (for special category data processing without valid basis)
- AI Act: Up to EUR 15M or 3% of global annual turnover (for non-compliance with high-risk obligations)
- Combined maximum exposure: EUR 35M or 7% of global annual turnover
For a company with EUR 500M annual turnover:
- GDPR maximum: EUR 20M (the 4% threshold yields EUR 20M, and the fixed amount is the same)
- AI Act maximum: EUR 15M (the 3% threshold yields EUR 15M)
- Combined maximum: EUR 35M
For a company with EUR 1B annual turnover:
- GDPR maximum: EUR 40M (4% of EUR 1B)
- AI Act maximum: EUR 30M (3% of EUR 1B)
- Combined maximum: EUR 70M
These are theoretical maximums, but they illustrate why integrated compliance — rather than treating each regulation in isolation — is a financial imperative.
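The fine arithmetic above follows the same rule under each regulation — the higher of a fixed amount or a percentage of global annual turnover — summed across regulations. A minimal sketch (the caps are the statutory figures cited above; the function itself is illustrative):

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct_cap: float) -> float:
    """Higher of the fixed amount and the percentage of global annual turnover."""
    return max(fixed_cap, pct_cap * turnover_eur)

GDPR_SEVERE = (20e6, 0.04)       # GDPR Art. 83(5): EUR 20M or 4%
AI_ACT_HIGH_RISK = (15e6, 0.03)  # AI Act Art. 99(4): EUR 15M or 3%

for turnover in (500e6, 1e9):
    gdpr = max_fine(turnover, *GDPR_SEVERE)
    ai_act = max_fine(turnover, *AI_ACT_HIGH_RISK)
    print(f"turnover EUR {turnover/1e6:,.0f}M: "
          f"GDPR {gdpr/1e6:.0f}M + AI Act {ai_act/1e6:.0f}M "
          f"= {(gdpr + ai_act)/1e6:.0f}M")
```

Running this reproduces the figures above: 20M + 15M = 35M at EUR 500M turnover, and 40M + 30M = 70M at EUR 1B.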
Five strategies for running both programmes efficiently
Strategy 1: Unified AI and data processing inventory
Maintain a single inventory that captures both your data processing activities (GDPR Article 30) and your AI systems (AI Act). For each AI system, record:
- System name, version, and vendor
- Personal data processed (if any) — categories, volume, legal basis
- Data controller and processor roles (GDPR)
- AI Act provider and deployer roles
- Risk classification under the AI Act
- Whether a DPIA has been completed (GDPR)
- Whether a FRIA is required and completed (AI Act)
- Applicable obligations under each regulation
This eliminates the common problem of siloed inventories where the privacy team tracks data processing but is unaware of AI systems, and the AI team tracks systems but ignores data protection obligations. See the AI systems inventory guide for the detailed process.
Strategy 2: Combined impact assessments
Where both a DPIA and a FRIA are required, run them as a single process with two sections. The DPIA section covers data protection risks under GDPR. The FRIA section covers the broader fundamental rights under the EU Charter. Article 27(4) explicitly endorses this approach.
Benefits: avoids duplicating the system description, risk identification, and stakeholder consultation. Ensures that data protection risks and broader fundamental rights risks are assessed in context with each other. Creates a single document for governance review and authority inspection.
Strategy 3: Consolidated transparency notices
Draft transparency notices that cover both regulatory requirements in a single communication:
- GDPR elements: Identity of controller, purpose of processing, legal basis, data categories, retention periods, data subject rights (access, rectification, erasure, portability, objection)
- AI Act elements: Existence of AI system, intended purpose, human oversight measures, system limitations, right to explanation for AI-assisted decisions
For high-risk AI systems that process personal data, a consolidated notice reduces notice fatigue for individuals and ensures consistency between the two disclosures.
Strategy 4: Coordinated incident reporting
GDPR requires personal data breaches to be reported to the DPA within 72 hours (Article 33). The AI Act requires serious incidents involving high-risk AI systems to be reported to the market surveillance authority immediately after a causal link is established (or is reasonably likely), and in any event within 15 days of becoming aware of the incident (Article 73). Shorter deadlines apply to the gravest cases: 2 days for widespread infringements and 10 days where the incident causes a death.
A single AI-related incident — for example, a credit scoring AI that malfunctions and exposes applicant data while producing erroneous decisions — may trigger both notification obligations to different authorities. Design your incident response process to identify both triggers simultaneously, prepare parallel notifications, and ensure consistency between the two reports.
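A sketch of dual-trigger identification — the incident attributes and deadline strings below are deliberate simplifications of Articles 33 and 73, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    personal_data_breached: bool      # GDPR Art. 33 trigger
    high_risk_ai_involved: bool       # AI Act Art. 73 trigger
    caused_death: bool = False
    widespread_infringement: bool = False

def notification_duties(incident: Incident) -> list[str]:
    """Return the notifications an incident triggers, with headline deadlines."""
    duties = []
    if incident.personal_data_breached:
        duties.append("GDPR Art. 33: notify DPA within 72 hours")
    if incident.high_risk_ai_involved:
        if incident.widespread_infringement:
            duties.append("AI Act Art. 73: notify market surveillance authority within 2 days")
        elif incident.caused_death:
            duties.append("AI Act Art. 73: notify market surveillance authority within 10 days")
        else:
            duties.append("AI Act Art. 73: notify market surveillance authority within 15 days")
    return duties

# The credit scoring example from the text: data exposed AND erroneous decisions
for duty in notification_duties(Incident(personal_data_breached=True,
                                         high_risk_ai_involved=True)):
    print(duty)
```

Encoding the triggers this way forces the incident intake form to ask both questions up front, so neither notification clock starts running unnoticed.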
Strategy 5: Shared governance structure
Appoint a single internal team or function responsible for both GDPR and AI Act compliance. The DPO's expertise in data protection, impact assessments, and regulatory engagement is directly transferable to AI Act obligations. The AI Act does not mandate a specific compliance role (no "AI Officer" equivalent to the DPO), but leveraging existing GDPR governance structures is efficient:
- The DPO can coordinate both the DPIA and FRIA processes.
- The data protection team already maintains processing inventories — extending them to cover AI systems is incremental.
- Existing relationships with DPAs can inform engagement with market surveillance authorities.
- GDPR training programmes can be extended to cover AI Act obligations for relevant staff.
Practical scenarios
Scenario 1: Biometric access control AI
A company deploys facial recognition for office access control.
- GDPR obligations: Legal basis for biometric data (an Article 6 basis plus an Article 9(2) condition — in practice usually explicit consent, since legitimate interest alone cannot legitimise special category processing), DPIA, privacy notice to employees, data subject access rights, retention limitation, security measures.
- AI Act obligations: High-risk under Annex III (biometric identification). Provider must complete risk management, technical documentation, conformity assessment. Deployer must implement human oversight, retain logs for 6+ months, inform employees about the AI system.
- Overlap: Both require transparency to employees. Both require impact assessment. Logging must satisfy both traceability (AI Act) and data protection (GDPR).
Scenario 2: AI-powered credit scoring
A bank uses an AI system to assess consumer loan applications.
- GDPR obligations: Legal basis for financial data processing, Article 22 rights (human intervention, explanation, right to contest), DPIA if automated decisions have legal effects, privacy notice with information about automated decision-making logic.
- AI Act obligations: High-risk under Annex III point 5(b). Provider obligations: risk management, data governance, Annex IV documentation, conformity assessment, post-market monitoring. Deployer obligations: human oversight (trained credit officers reviewing AI recommendations), log retention, affected-person notification, FRIA (deployers of creditworthiness-assessment systems fall within Article 27).
- Overlap: Both require explanation of automated decisions. Both require human involvement. Combined DPIA + FRIA assessment recommended.
Scenario 3: Customer service chatbot
A retailer deploys a chatbot for customer support.
- GDPR obligations: Privacy notice about data collected during conversations, legal basis for processing, retention policy, data subject rights.
- AI Act obligations: Transparency under Article 50 — inform users they are interacting with an AI system. If the chatbot processes personal data to make recommendations with significant effects (e.g., insurance claim routing), it may trigger higher obligations.
- Overlap: Both require informing the individual. Consolidate into a single notice at the start of the interaction: "You are chatting with an AI assistant. [Privacy notice link]."
Scenario 4: Internal workforce analytics
A company uses AI to analyse employee productivity and identify training needs.
- GDPR obligations: Legal basis for employee monitoring (typically legitimate interest with stringent balancing), DPIA (systematic monitoring of employees is a high-risk processing trigger), employee notification, works council consultation (in jurisdictions that require it).
- AI Act obligations: If the system evaluates employees for promotion, assignment, or termination decisions, it is high-risk under Annex III point 4. Full provider/deployer obligations apply. Emotion recognition in workplaces is prohibited under Article 5.
- Overlap: Both require impact assessment. Both restrict automated employment decisions. The AI Act prohibition on workplace emotion recognition adds a hard constraint that GDPR does not impose.
Common misconceptions
Misconception 1: "GDPR compliance means AI Act compliance"
GDPR compliance addresses data protection. The AI Act adds requirements that GDPR does not cover: risk management systems, technical documentation under Annex IV, conformity assessment, CE marking, post-market monitoring, registration in the EU database, and provider-specific obligations. A company fully compliant with GDPR may still be entirely non-compliant with the AI Act.
Misconception 2: "The AI Act only applies if we process personal data"
The AI Act applies to AI systems regardless of whether they process personal data. An AI system that allocates emergency resources based on geographic data, optimises traffic signals, or manages power grid stability may be high-risk under the AI Act without triggering GDPR at all. The two regulations have fundamentally different scope triggers.
Misconception 3: "We only need one impact assessment"
If a high-risk AI system processes personal data and the deployer falls within Article 27 — a body governed by public law, a private operator providing public services, or a deployer of credit scoring or insurance risk-assessment systems — both a DPIA (GDPR) and a FRIA (AI Act) are required. They can be combined into one document — and should be — but the combined assessment must address both the data protection risks covered by GDPR and the broader fundamental rights covered by the AI Act. A DPIA alone is not sufficient as a FRIA. See the FRIA guide.
Misconception 4: "Fines are capped at the higher of the two regulations"
Fines under GDPR and the AI Act accumulate independently. There is no provision for offsetting or capping combined penalties. A single incident that violates both regulations can result in separate fines from different authorities — the DPA for GDPR and the market surveillance authority for the AI Act. The combined theoretical maximum for prohibited AI practices involving special category data is EUR 35M (AI Act) + EUR 20M (GDPR) = EUR 55M in fixed amounts, or 7% + 4% = 11% of global annual turnover.
Misconception 5: "Our DPO can handle AI Act compliance as a side task"
While the DPO's expertise is directly relevant and the governance structures can be shared, AI Act compliance requires additional technical knowledge that most DPOs do not possess: understanding of AI system architectures, machine learning evaluation metrics, conformity assessment procedures, and post-market monitoring design. The DPO should coordinate, but engineering, data science, and product teams must be actively involved. See the compliance checklist for the full scope of work.
Key takeaway
The GDPR and the EU AI Act are complementary, not competing. GDPR protects personal data; the AI Act protects people from AI system risks. Most organisations using AI in the EU will be subject to both. Running them as an integrated compliance programme — shared inventories, combined assessments, consolidated notices, coordinated incident reporting, shared governance — is both legally sound and operationally efficient.
The organisations that will struggle most are those that treat the AI Act as a separate project, staffed by a separate team, with a separate inventory. Integration with existing GDPR governance is not just a convenience — it is the only practical path for most compliance teams facing the 2 August 2026 deadline.
Run the free AI Act assessment to identify your AI Act obligations alongside your existing GDPR programme.
For the full legal text of every AI Act provision, see the complete AI Act guide.
Frequently asked questions
Can a single incident trigger fines under both GDPR and the AI Act?
Yes. The two regulations are enforced by different authorities (DPAs for GDPR, market surveillance authorities for the AI Act), and penalties accumulate independently. For example, deploying a biometric AI system without a DPIA, without a FRIA, and without a valid legal basis for biometric data processing could result in separate fines under GDPR (from the DPA) and the AI Act (from the market surveillance authority). There is no mechanism for offsetting one fine against the other.
Which regulation takes precedence when they conflict?
The AI Act explicitly states (Article 2(7)) that it does not affect GDPR. Where both apply, organisations must comply with both. In practice, direct conflicts are rare — the two regulations address different dimensions of the same activity. The more common challenge is cumulative obligations (needing both a DPIA and a FRIA, both a privacy notice and an AI disclosure) rather than contradictory ones. Where apparent tension exists (e.g., AI Act logging requirements vs GDPR data minimisation), the practical resolution is to design logging to capture what the AI Act requires while applying GDPR principles (pseudonymisation, access controls, retention limits) to the logs.
Do I need separate compliance teams for GDPR and the AI Act?
Not necessarily, and in most cases a separate team is counterproductive. The recommended approach is a single governance structure that coordinates both programmes. The DPO and data protection team provide the foundation; engineering and data science teams provide the technical expertise the AI Act requires. Dedicated AI Act headcount may be needed for large organisations with many high-risk systems, but this should sit within or closely alongside the existing privacy function, not in a separate silo.
How should we handle AI systems that process no personal data but are high-risk under the AI Act?
These systems fall outside GDPR's scope entirely but are fully subject to AI Act obligations. The provider must complete risk management, technical documentation, conformity assessment, and post-market monitoring. The deployer must implement human oversight, retain logs, and — if applicable — conduct a FRIA. The absence of personal data simplifies some aspects (no DPIA needed, no data subject rights to manage) but does not reduce the AI Act obligations.
What about AI systems used internally that never reach the EU market?
If the AI system is used within the EU by a deployer, the AI Act applies — even if the system was developed internally and never sold commercially. "Placed on the market" and "put into service" are both triggers under Article 2. An internal AI tool used by an EU-based organisation to screen job applicants is "put into service" and is subject to AI Act deployer obligations if it is high-risk. GDPR also applies if the system processes personal data of EU individuals, regardless of whether the system is commercial or internal.
Is there a single certification that covers both GDPR and the AI Act?
No. GDPR offers voluntary certification mechanisms (Article 42) through accredited certification bodies, but these are optional and do not confer legal compliance. The AI Act requires mandatory conformity assessment for high-risk systems (Article 43), which is a separate process. ISO 42001 (AI management systems) provides a framework that touches both areas but is not a substitute for either GDPR compliance or AI Act conformity assessment. Organisations should pursue AI Act conformity assessment as the mandatory requirement and layer any voluntary certifications on top.
Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.
Check your AI system's compliance
Free assessment — no signup required. Get your risk classification in minutes.
Run free assessment


