AI Regulation Compared: EU, US, UK, China (2026)

Compare global AI regulation in 2026. EU AI Act vs US, UK, and China approaches. Extraterritorial scope and multi-jurisdiction compliance strategy.

Legalithm Team · 28 min read
Topic: AI Regulation
Updated: Apr 2026

AI Regulation Compared: EU, US, UK, China — A Global Compliance Guide for 2026

If your AI product serves users in multiple countries, you don't face one regulatory framework — you face several. The EU AI Act, the United States' patchwork of sector-specific rules and state laws, the United Kingdom's regulator-led adaptive approach, and China's content-focused and state-controlled regulations each take fundamentally different approaches to the same core question: how should governments manage the risks and benefits of artificial intelligence?

For companies developing or deploying AI systems across borders, the differences are not academic. A hiring algorithm that is perfectly lawful to operate in Singapore may require a conformity assessment in Brussels, an algorithmic audit in New York, and mandatory filing with the Cyberspace Administration of China. Understanding these divergences — and the growing areas of convergence — is essential for any organisation building a viable global compliance strategy.

This guide provides a detailed, side-by-side comparison of the four most consequential AI regulatory regimes in the world, along with practical guidance on how to build a single compliance programme that satisfies multiple jurisdictions simultaneously. If you are new to the EU AI Act specifically, start with our complete guide to the EU AI Act before diving into this global comparison.

TL;DR — Global AI regulation at a glance

  • No two major jurisdictions regulate AI the same way. The EU has a comprehensive, horizontal law. The US relies on existing agencies and voluntary frameworks. The UK delegates to sector regulators. China enacts narrowly targeted regulations at speed.
  • The EU AI Act has the broadest extraterritorial reach. It applies to any company placing an AI system on the EU market or whose system's output is used in the EU, regardless of where the company is headquartered — similar to the GDPR's global reach.
  • The United States has no single federal AI law. Compliance depends on which federal agency oversees your sector (FDA, SEC, FTC, EEOC) and which states you operate in (Colorado, Illinois, New York City each have distinct AI-related requirements).
  • The UK deliberately avoids a single AI statute. Instead, existing regulators — the FCA, ICO, Ofcom, CMA — apply five cross-cutting AI principles within their existing mandates.
  • China regulates AI content and algorithms directly. Three binding regulations already govern algorithmic recommendations, deepfakes, and generative AI, with mandatory government filing requirements.
  • Practical strategy: build to the EU AI Act as the highest common denominator, then map jurisdiction-specific requirements using NIST AI RMF and ISO 42001 to fill gaps.

The four major regulatory approaches

Before examining each jurisdiction in detail, the table below provides a high-level comparison of the four major AI regulatory regimes as they stand in 2026.

| Feature | EU | US | UK | China |
| --- | --- | --- | --- | --- |
| Approach type | Comprehensive, horizontal legislation | Sector-specific, fragmented | Regulator-led, principles-based | State-controlled, content-focused |
| Primary legislation | AI Act (Regulation (EU) 2024/1689) | No single federal AI law; Executive Orders, state laws | No single AI Act; existing regulatory powers | Algorithmic Recommendation Regulation, Deep Synthesis Regulation, Generative AI Regulation |
| Risk framework | Four-tier risk classification (prohibited → minimal) | Varies by sector and agency | Five cross-cutting principles applied by sector regulators | Content-risk and social-stability focused |
| Primary enforcement body | AI Office + national competent authorities | FTC, FDA, SEC, EEOC, state AGs | FCA, ICO, Ofcom, CMA, DRCF | Cyberspace Administration of China (CAC) |
| Extraterritorial scope | Yes — applies to non-EU providers placing AI on EU market | Limited — primarily domestic; some state laws reach out-of-state actors | Limited — primarily domestic | Yes — applies to services accessible in China |
| Maximum penalties | EUR 35M or 7% of global turnover | Varies by agency; FTC can impose significant fines; state penalties vary | Varies by regulator (e.g., ICO UK GDPR fines up to £17.5M / 4% of turnover) | Fines, service suspension, criminal liability for severe violations |
| Voluntary standards | Harmonised standards under development; ISO 42001 referenced | NIST AI RMF 1.0; ISO 42001 encouraged | DSIT AI principles; AISI evaluations | National AI standards (TC260) |

This table reveals a fundamental truth: there is no single "global AI regulation." Each jurisdiction reflects different policy priorities — consumer protection, innovation promotion, content control, or fundamental rights — and organisations operating internationally must navigate all of them.

European Union: Comprehensive risk-based regulation

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive, binding legal framework for artificial intelligence. It entered into force on 1 August 2024, with obligations phasing in between February 2025 and August 2027. For a detailed timeline, see our EU AI Act compliance checklist for 2026.

Risk-based four-tier classification

The AI Act classifies AI systems into four tiers based on their potential risk to health, safety, and fundamental rights:

| Risk tier | Regulatory treatment | Examples |
| --- | --- | --- |
| Prohibited | Banned outright | Social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), manipulative subliminal techniques |
| High-risk | Heavy compliance obligations | AI in hiring and recruitment, credit scoring, critical infrastructure, law enforcement, migration management, education assessment |
| Limited risk | Transparency obligations | Chatbots, emotion recognition systems, deepfake generators |
| Minimal risk | No specific obligations (codes of conduct encouraged) | Spam filters, AI-powered video games, inventory management |

The classification rules are set out in Article 6, and the prohibited practices are defined in Article 5. If you're unsure how your system is classified, our free AI Act risk classification tool can help you determine your obligations in under five minutes.
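As an illustration only, the four-tier triage can be sketched as a lookup over use-case categories. The category tags below are hypothetical simplifications of the Annex III and Article 5 language, not legal criteria — real classification requires case-by-case legal analysis:

```python
# Illustrative sketch only -- NOT legal advice. The real test under
# Articles 5 and 6 of the AI Act turns on detailed legal criteria;
# the use-case tags here are simplified assumptions.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"hiring", "credit_scoring", "critical_infrastructure",
             "law_enforcement", "migration", "education_assessment"}
LIMITED_RISK = {"chatbot", "emotion_recognition", "deepfake_generation"}

def classify_risk_tier(use_case: str) -> str:
    """Map a (simplified) use-case tag to an AI Act risk tier."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk"
    if use_case in LIMITED_RISK:
        return "limited risk"
    return "minimal risk"

print(classify_risk_tier("hiring"))       # high-risk
print(classify_risk_tier("spam_filter"))  # minimal risk
```

Even a crude triage like this is useful as a first screening pass across an AI inventory — anything landing in the top two tiers gets escalated to counsel.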

Extraterritorial scope — the "Brussels Effect"

One of the AI Act's most consequential features is its extraterritorial scope, defined in Article 2. The regulation applies to:

  • Providers (developers) of AI systems that are placed on the EU market or put into service in the EU, regardless of where the provider is established.
  • Deployers (users) of AI systems that are located within the EU.
  • Providers and deployers located in third countries where the output produced by the AI system is used in the EU.

This means a US-based company that builds an AI hiring tool used by a German employer must comply with the AI Act — even if the company has no office, server, or employee in the European Union. This mirrors the GDPR's extraterritorial model and is widely expected to produce a "Brussels Effect": companies will build to the EU's standard globally rather than maintain separate compliance programmes for different markets.

GPAI model obligations

The AI Act introduced a separate category for general-purpose AI (GPAI) models — foundation models and large language models that can be adapted for many downstream tasks. Under Article 53, providers of GPAI models must:

  • Maintain and make available technical documentation, including training and testing processes.
  • Provide information and documentation to downstream providers integrating the GPAI model into their AI systems.
  • Establish a policy to respect copyright law, including the EU Copyright Directive.
  • Publish a sufficiently detailed summary of training data.

GPAI models presenting systemic risk — presumed under Article 51(2) where cumulative training compute exceeds 10²⁵ floating-point operations (FLOP) — face additional obligations, including adversarial testing, incident reporting, and cybersecurity protections. For a complete breakdown of GPAI obligations, see our guide to general-purpose AI model obligations.
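To get a feel for what the 10²⁵ FLOP threshold means in practice, one can use the common 6·N·D estimation heuristic (roughly 6 FLOPs per parameter per training token). This rule of thumb is an industry convention, not a method the AI Act itself prescribes:

```python
# Rough training-compute estimate using the common 6*N*D heuristic
# (~6 FLOPs per parameter per training token). An industry rule of
# thumb, not a calculation method defined in the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2) presumption

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets the Art. 51(2) presumption."""
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# A 70B-parameter model trained on 15T tokens lands at ~6.3e24 FLOP,
# just under the threshold; doubling both inputs would cross it.
print(presumed_systemic_risk(70e9, 15e12))   # False
print(presumed_systemic_risk(200e9, 20e12))  # True
```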

Enforcement and penalties

The EU AI Act's penalty regime is among the strictest in the world. Maximum fines are:

  • EUR 35 million or 7% of global annual turnover for prohibited AI practices.
  • EUR 15 million or 3% of global annual turnover for violations of high-risk requirements.
  • EUR 7.5 million or 1% of global annual turnover for supplying incorrect information to authorities.

Enforcement is split between the EU AI Office (for GPAI model obligations) and national competent authorities designated by each Member State. The penalty framework itself is set out in Article 99.

Why it matters

The EU AI Act matters beyond Europe's borders because of its scale and ambition. With 450 million consumers in the single market and a regulatory tradition that has already reshaped global data protection (GDPR) and product safety, the AI Act is positioned to become the de facto global baseline for AI governance. Companies that comply with the AI Act will find it substantially easier to meet requirements in most other jurisdictions.

United States: Sector-specific and innovation-first

The United States takes a fundamentally different approach to AI regulation: rather than enacting a single comprehensive law, it relies on existing federal agencies, voluntary frameworks, and an increasingly active state-level legislative landscape. The underlying philosophy prioritises innovation and economic competitiveness, with regulation targeted at specific harms rather than AI as a technology category.

No comprehensive federal AI law

As of April 2026, the United States has no binding, horizontal federal AI statute comparable to the EU AI Act. Federal AI governance is shaped by:

  • Executive Orders: Executive Order 14110 (October 2023) established reporting requirements for developers of powerful AI models and directed agencies to address AI risks in their respective domains. It was rescinded in January 2025 and replaced by subsequent executive actions — a reminder that executive-branch AI policy can shift wholesale between administrations.
  • NIST AI Risk Management Framework (AI RMF 1.0): Published in January 2023, the AI RMF provides a voluntary, structured approach to managing AI risks across the lifecycle. It organises risk management into four functions — Govern, Map, Measure, Manage — and has become the primary reference framework for organisations seeking a systematic approach to AI governance in the US.
  • Agency-specific guidance: Individual agencies have issued binding or semi-binding guidance within their existing mandates, without waiting for new legislation.

Federal agency enforcement landscape

| Agency | Domain | AI-specific actions |
| --- | --- | --- |
| FTC | Consumer protection, unfair/deceptive practices | Enforcement actions against deceptive AI claims; guidance on algorithmic fairness; authority under Section 5 of the FTC Act |
| FDA | Medical devices, health AI | AI/ML-based Software as a Medical Device (SaMD) framework; pre-market review for AI diagnostic tools |
| SEC | Financial services, investment | Proposed rules on predictive data analytics in broker-dealer and investment adviser activities |
| EEOC | Employment discrimination | Guidance on AI-driven hiring tools under Title VII; focus on disparate impact |
| HHS/OCR | Healthcare, civil rights | AI nondiscrimination requirements in healthcare under Section 1557 |
| CFPB | Consumer financial services | Adverse action notice requirements when AI is used in credit decisions |

The practical consequence is that US AI compliance is sector-dependent. A healthcare AI company faces FDA oversight; an AI hiring platform faces EEOC scrutiny; a consumer-facing chatbot falls under FTC jurisdiction. Organisations must identify which agencies have oversight authority over their specific use case.

State-level laws — the emerging patchwork

The most significant legislative activity in the US is happening at the state level. Several states have enacted or are advancing AI-specific legislation:

  • Colorado AI Act (SB 24-205): Enacted in 2024 with enforcement beginning in 2026, this is the first comprehensive US state law directly regulating "high-risk AI systems" in consequential decisions (employment, lending, insurance, housing, education, healthcare). It requires deployers to conduct impact assessments and provide consumer notifications.
  • New York City Local Law 144: Requires annual bias audits for automated employment decision tools used by employers in New York City. Audit results must be publicly posted.
  • Illinois Biometric Information Privacy Act (BIPA): Though not AI-specific, BIPA's strict consent requirements for biometric data collection directly impact AI systems using facial recognition, voiceprint analysis, or other biometric processing.
  • California SB 1047: A safety-focused bill for large AI models, SB 1047 passed the legislature but was vetoed in September 2024. California nonetheless remains the most active state in AI policy, with multiple bills addressing transparency, watermarking, and deepfake disclosure.

The state-level patchwork creates compliance complexity for companies operating nationally. A company deploying the same AI system in New York, Colorado, and Illinois may face three distinct sets of obligations — bias audits, impact assessments, and biometric consent requirements — none of which are harmonised.

NIST AI Risk Management Framework

Although voluntary, the NIST AI RMF has become the closest thing to a national AI compliance standard in the United States. Its four core functions map reasonably well to the EU AI Act's requirements:

| NIST AI RMF function | Purpose | EU AI Act parallel |
| --- | --- | --- |
| Govern | Establish AI governance structures and accountability | AI governance and quality management (Arts. 9, 17) |
| Map | Identify and contextualise AI risks | Risk management system (Art. 9), intended purpose definition |
| Measure | Assess and benchmark identified risks | Testing, validation, performance metrics (Arts. 9, 15) |
| Manage | Prioritise and act on risks | Corrective actions, post-market monitoring (Arts. 20, 72) |

Organisations building a governance framework that maps NIST AI RMF to the EU AI Act's requirements are well-positioned to demonstrate compliance in both jurisdictions.
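One pragmatic way to operationalise the mapping above is a simple crosswalk in code, so that evidence gathered once can be reported under both frameworks. The structure below is a hypothetical sketch following the table above; real compliance tooling would attach evidence artifacts to each entry:

```python
# Hypothetical crosswalk between NIST AI RMF functions and EU AI Act
# articles, following the mapping table above. Illustrative only.

NIST_TO_EU_AI_ACT = {
    "Govern":  ["Art. 9", "Art. 17"],   # governance & quality management
    "Map":     ["Art. 9"],              # risk identification, intended purpose
    "Measure": ["Art. 9", "Art. 15"],   # testing, validation, metrics
    "Manage":  ["Art. 20", "Art. 72"],  # corrective actions, post-market monitoring
}

def eu_articles_for(nist_function: str) -> list[str]:
    """EU AI Act articles touched by evidence filed under a NIST function."""
    return NIST_TO_EU_AI_ACT.get(nist_function, [])

print(eu_articles_for("Manage"))  # ['Art. 20', 'Art. 72']
```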

United Kingdom: Regulator-led adaptive governance

The United Kingdom has deliberately chosen not to enact a single AI statute equivalent to the EU AI Act. Instead, the UK government published a pro-innovation AI white paper in March 2023, setting out a framework of principles that existing sector regulators are expected to apply within their domains.

Five cross-cutting principles

The UK's regulatory approach is built on five principles that all sector regulators are expected to interpret and enforce:

  1. Safety, security, and robustness — AI systems should function reliably and securely.
  2. Appropriate transparency and explainability — Users should understand when they are interacting with AI and how decisions are made.
  3. Fairness — AI should not produce discriminatory outcomes or undermine legal rights.
  4. Accountability and governance — Clear lines of responsibility must exist for AI outcomes.
  5. Contestability and redress — Individuals should be able to challenge AI-driven decisions that affect them.

These principles are intentionally high-level and non-prescriptive. The government has explicitly stated that it does not want to create a rigid, one-size-fits-all regulatory framework, arguing that sector regulators are better positioned to understand the specific risks AI poses within their domains.

Sector regulators and the DRCF

The following regulators are responsible for applying the five AI principles:

| Regulator | Domain | AI focus areas |
| --- | --- | --- |
| ICO (Information Commissioner's Office) | Data protection, privacy | AI and personal data processing, algorithmic fairness, AI-generated profiling |
| FCA (Financial Conduct Authority) | Financial services | AI in trading, credit scoring, consumer communications |
| Ofcom | Communications, online safety | AI-generated content, recommendation algorithms, deepfake regulation |
| CMA (Competition & Markets Authority) | Competition | AI foundation model market concentration, anti-competitive behaviour |
| MHRA | Healthcare devices | AI-powered medical devices and diagnostics |

The Digital Regulation Cooperation Forum (DRCF) — comprising the ICO, FCA, Ofcom, and CMA — coordinates cross-cutting AI regulatory issues to avoid conflicting guidance.

AI Safety Institute

The UK established the AI Safety Institute (AISI) in November 2023 — renamed the AI Security Institute in 2025 — initially focused on frontier AI model evaluations. AISI conducts pre-deployment testing of advanced AI models, publishes safety assessments, and contributes to international AI safety standards. While AISI does not have regulatory enforcement power, its evaluations are influential and increasingly referenced in procurement and governance frameworks.

Pro-innovation vs precautionary

The UK's approach is explicitly pro-innovation, contrasting with the EU's more precautionary stance. Key differences include:

| Aspect | EU approach | UK approach |
| --- | --- | --- |
| Legislation | Comprehensive binding regulation | Principles-based, largely non-statutory |
| Compliance burden | Significant for high-risk AI | Lighter, dependent on sector |
| Enforcement | Centralised AI Office + national authorities | Distributed across existing regulators |
| Innovation posture | Regulatory sandboxes as safety valve | Regulation as enabler of innovation |
| Speed of adaptation | Slow (legislative amendment required) | Fast (regulatory guidance can be updated quickly) |

The UK approach has advantages in flexibility and speed of adaptation, but creates uncertainty for companies that prefer clear, codified rules. It also creates potential for regulatory fragmentation if different sector regulators interpret the five principles inconsistently.

China: State-controlled and content-focused

China has taken the most targeted and rapid approach to AI regulation, enacting a series of binding regulations that focus on specific AI applications — particularly those with implications for content control, social stability, and data sovereignty. Unlike the EU's comprehensive model or the US's fragmented approach, China's strategy is to regulate specific AI use cases quickly and iteratively.

Three binding AI regulations

China has enacted three major AI-specific regulations in rapid succession:

| Regulation | Effective date | Scope |
| --- | --- | --- |
| Provisions on the Management of Algorithmic Recommendations in Internet Information Services | 1 March 2022 | Algorithmic recommendation systems used in internet platforms |
| Provisions on the Management of Deep Synthesis in Internet Information Services (Deep Synthesis Regulation) | 10 January 2023 | AI-generated or AI-modified content: deepfakes, synthetic media, voice cloning |
| Interim Measures for the Management of Generative AI Services (Generative AI Regulation) | 15 August 2023 | Generative AI services offered to the public in China |

Each regulation is administered by the Cyberspace Administration of China (CAC), often jointly with the Ministry of Science and Technology and the Ministry of Industry and Information Technology.

Key regulatory requirements

China's AI regulations share several distinctive characteristics:

  • Mandatory algorithmic filing: Organisations deploying algorithmic recommendation systems or generative AI services in China must register their algorithms with the CAC through the Algorithm Filing System. This includes disclosing the basic logic, intended purpose, and operating mechanisms of the algorithm.
  • Content control obligations: AI-generated content must adhere to "core socialist values" and may not contain content that subverts state power, undermines national unity, promotes terrorism, or violates other content restrictions. This is a fundamentally different regulatory objective than the EU's focus on fundamental rights or the US's focus on consumer protection.
  • Labelling and watermarking: The Deep Synthesis Regulation requires that AI-generated content be clearly labelled, including through technical watermarking. This parallels the EU AI Act's transparency obligations under Article 50 but is motivated by different policy goals.
  • Security assessments: Generative AI services must undergo a security assessment before being made available to the public, including evaluation of training data sources and output safety.
  • Data sovereignty: Training data must comply with China's Personal Information Protection Law (PIPL) and Data Security Law (DSL). Cross-border data transfers face strict restrictions, particularly for data classified as "important" or involving personal information of Chinese residents.

Social credit and algorithmic governance

China's AI regulations exist within the broader context of its social credit system and digital governance infrastructure. Algorithmic recommendation systems used by internet platforms — from e-commerce product suggestions to news feed ranking — are subject to user-right requirements including the ability to opt out of personalised recommendations and to request explanations of algorithmic decisions.

While these user-rights provisions have surface-level similarities to European transparency requirements, they operate within a regulatory framework where the state retains overriding authority to direct algorithmic outcomes in the interest of social stability.

Implications for foreign companies

Foreign companies offering AI services accessible in China face direct regulatory obligations. The Generative AI Regulation applies to services "provided to the public within the territory of the People's Republic of China," which includes cloud-based AI services accessible by Chinese users. Compliance requires:

  • Algorithmic filing with the CAC.
  • Training data compliance with PIPL and DSL.
  • Content output aligned with Chinese content regulations.
  • Cooperation with regulatory inspections and audits.

For many Western companies, meeting Chinese content-control requirements creates a direct conflict with EU and US regulatory expectations around freedom of expression and non-discrimination. This is one of the most challenging aspects of true multi-jurisdiction AI compliance.

Other notable jurisdictions

Beyond the four major regulatory regimes, several other jurisdictions are actively developing AI governance frameworks:

| Jurisdiction | Framework | Status (2026) | Key characteristics |
| --- | --- | --- | --- |
| Canada | Artificial Intelligence and Data Act (AIDA, Part 3 of Bill C-27) | Stalled — Bill C-27 lapsed when Parliament was prorogued in early 2025 | Risk-based classification; criminal penalties for reckless AI deployment causing serious harm; targeted "high-impact" AI systems |
| Brazil | PL 2338/2023 (AI Regulatory Framework) | Under legislative consideration | Rights-based approach; influenced by both EU AI Act and Brazilian data protection tradition (LGPD) |
| Japan | AI Guidelines for Business (voluntary) | In effect | Voluntary, industry-led; emphasis on "Social Principles of Human-Centric AI"; no binding legislation |
| South Korea | AI Basic Act | Enacted 2025 | Risk-based framework; AI impact assessments for high-risk systems; mandatory transparency |
| Singapore | Model AI Governance Framework (2nd ed.) | In effect (voluntary) | Practical, industry-friendly guidance; voluntary adoption; emphasis on explainability and fairness testing |
| OECD | OECD AI Principles (2019, updated 2024) | Adopted by 46+ countries | Non-binding international principles; foundational influence on EU AI Act and NIST AI RMF |

The OECD AI Principles deserve special mention as they represent the closest thing to a global consensus on AI governance. Endorsed by over 46 countries including all G7 members, the principles — human-centred values, transparency, robustness, accountability, and inclusive growth — have directly influenced both the EU AI Act and the NIST AI RMF.

Extraterritorial scope: When foreign laws apply to you

One of the most practically important questions for any company deploying AI across borders is: which foreign laws apply to me? The answer depends on the specific regulation and the nature of your activities.

| Trigger | EU AI Act | US (Federal) | US (State) | UK | China |
| --- | --- | --- | --- | --- | --- |
| AI system placed on local market | Yes — Article 2 | N/A (no federal law) | Varies by state | Yes (if deployed to UK users) | Yes |
| AI output used by local residents | Yes — even if provider is outside EU | Limited | Varies | Limited | Yes |
| No local establishment required | Correct — no EU presence needed | N/A | Varies (NYC LL144 applies to NYC employers) | Generally applies to UK-based deployers | Correct — no Chinese presence needed |
| Authorised representative required | Yes — non-EU providers must appoint one | N/A | No | Not currently | Not formally, but local partner often required in practice |

Key takeaway on extraterritorial reach

The EU AI Act and Chinese regulations have the broadest extraterritorial reach. If your AI system is used by EU residents or accessible by Chinese users, you are likely subject to those jurisdictions' rules regardless of where your company is headquartered. US regulations are primarily territorial but state laws like Colorado's AI Act can reach companies serving residents of those states. UK regulations are largely territorial, applying to deployers within the UK, but the ICO's data protection enforcement has extraterritorial dimensions through the UK GDPR.
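The trigger logic above can be approximated as a first-pass applicability screen. This is a deliberately simplified sketch under the assumptions in the table — market presence and output destination are the only inputs — and is no substitute for legal analysis:

```python
# First-pass screening of which AI regimes may apply, based on the
# extraterritoriality triggers discussed above. Deliberately
# simplified; real applicability analysis needs counsel.

def applicable_regimes(markets: set[str], output_used_in: set[str]) -> set[str]:
    """Rough applicability screen keyed on market presence and output destination."""
    regimes = set()
    # EU AI Act reaches providers whose output is used in the EU,
    # even without any EU establishment.
    if "EU" in markets or "EU" in output_used_in:
        regimes.add("EU AI Act")
    if "CN" in markets:  # services accessible in China
        regimes.add("China CAC regulations")
    if "UK" in markets:  # largely territorial, deployer-focused
        regimes.add("UK sector regulators")
    if "US" in markets:
        regimes.add("US federal agencies + state laws")
    return regimes

# US company with no EU office, but output consumed by an EU client:
print(applicable_regimes({"US"}, {"EU"}))
```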

Practical multi-jurisdiction compliance strategy

Building separate compliance programmes for each jurisdiction is unsustainable for most organisations. The more effective approach is to build a single, layered compliance programme anchored on the highest common denominator — the EU AI Act — and then add jurisdiction-specific requirements as needed.

Step 1: Anchor on the EU AI Act

The EU AI Act imposes the most comprehensive and prescriptive requirements of any current AI regulation. An organisation that fully complies with the AI Act will have addressed approximately 70–80% of requirements in other jurisdictions by default, because the AI Act requires:

  • Risk classification and management (Article 6, Article 9)
  • Technical documentation (Article 11)
  • Human oversight mechanisms (Article 14)
  • Transparency to users (Article 50)
  • Accuracy, robustness, and cybersecurity (Article 15)
  • Quality management systems (Article 17)
  • Post-market monitoring (Article 72)
  • Conformity assessment (Article 43)

Step 2: Map NIST AI RMF to fill US-specific gaps

Organisations operating in the United States should map their EU AI Act compliance documentation to the NIST AI RMF structure. Because NIST is voluntary, the goal is not legal compliance but demonstrating due diligence to US regulators in the event of an enforcement action or litigation. A strong AI governance framework should address both.

| EU AI Act requirement | NIST AI RMF mapping | Additional US action required |
| --- | --- | --- |
| Risk management system (Art. 9) | Map → Measure function | Sector-specific risk assessments (FDA, EEOC) |
| Technical documentation (Art. 11) | Map → Govern function | Maintain model cards aligned with NIST AI 100-1 |
| Human oversight (Art. 14) | Map → Manage function | Document human-in-the-loop for EEOC defence |
| Bias testing (Art. 10) | Map → Measure function | NYC LL144 bias audits, Colorado impact assessments |
| Transparency (Art. 50) | Map → Govern function | FTC substantiation for AI marketing claims |

Step 3: Layer jurisdiction-specific obligations

On top of the EU AI Act baseline and NIST mapping, add the jurisdiction-specific requirements that are not captured by either framework:

  • China: Algorithmic filing, content compliance review, security assessment, PIPL data localisation requirements.
  • UK: Sector-specific regulatory engagement (notify FCA if operating in financial services, ICO for data-intensive AI systems).
  • US states: NYC bias audit (Local Law 144), Colorado impact assessment (SB 24-205), Illinois BIPA biometric consent.
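The layering exercise lends itself to a per-jurisdiction obligations register sitting on top of the EU baseline. The entries below are a hypothetical sketch drawn from the examples in this section, not an exhaustive register:

```python
# Hypothetical obligations register layered on top of the EU AI Act
# baseline, per Step 3. Entries are illustrative, not exhaustive.

BASELINE = ["EU AI Act high-risk controls"]

JURISDICTION_EXTRAS = {
    "CN":     ["CAC algorithmic filing", "content compliance review",
               "security assessment", "PIPL data localisation"],
    "UK":     ["FCA engagement (financial services)",
               "ICO engagement (data-intensive AI)"],
    "US-NYC": ["Local Law 144 bias audit"],
    "US-CO":  ["SB 24-205 impact assessment"],
    "US-IL":  ["BIPA biometric consent"],
}

def obligations(jurisdictions: list[str]) -> list[str]:
    """EU baseline plus jurisdiction-specific extras, de-duplicated in order."""
    out = list(BASELINE)
    for j in jurisdictions:
        for item in JURISDICTION_EXTRAS.get(j, []):
            if item not in out:
                out.append(item)
    return out

print(obligations(["US-NYC", "US-CO"]))
```

The design point is that the baseline appears exactly once regardless of how many jurisdictions are added — mirroring the "build once to the EU standard, layer the rest" strategy.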

Step 4: Adopt ISO 42001 as the management system standard

ISO/IEC 42001:2023 (AI Management System) provides an internationally recognised framework for establishing, maintaining, and continuously improving an AI management system. Certification under ISO 42001 provides evidence of systematic AI governance that is recognised across jurisdictions and can simplify regulatory engagement in any market.

Step 5: Establish a regulatory monitoring function

The global AI regulatory landscape is evolving rapidly. Organisations need a dedicated function — whether an individual, a team, or an external provider — to monitor regulatory developments across their operating jurisdictions and update the compliance programme accordingly.

Real-world compliance scenarios

Scenario 1: US SaaS company selling AI HR tools in Europe

Real-world example: A San Francisco-based SaaS company develops an AI-powered resume screening tool and sells it to enterprise clients across the United States and Europe. The tool uses machine learning to rank job applicants based on predicted job performance.

Regulatory analysis:

  • EU AI Act: The tool is a high-risk AI system under Annex III, Section 4(a) — AI used in recruitment and selection of natural persons. The US company is a provider under the AI Act and must comply with the full set of high-risk obligations, including a conformity assessment, technical documentation, risk management system, bias testing, and human oversight mechanisms. The company must also appoint an authorised representative in the EU under Article 22. Maximum penalty for non-compliance: EUR 15 million or 3% of global turnover.
  • US federal: The tool falls under EEOC jurisdiction. Any disparate impact on protected groups under Title VII could result in enforcement action. The company should document validation studies and adverse impact analyses.
  • NYC Local Law 144: If any New York City-based employer uses the tool, the tool must undergo an annual independent bias audit, and the audit results must be publicly posted.
  • Colorado AI Act: If Colorado-based employers deploy the tool, the company must support deployers in completing impact assessments and providing consumer notifications.

Practical approach: Build the compliance programme around the EU AI Act's high-risk requirements, which will satisfy most US requirements by default. Add the NYC bias audit and Colorado impact assessment as supplementary obligations. Use NIST AI RMF documentation to demonstrate due diligence to US regulators.

Scenario 2: European company using a Chinese generative AI model

Real-world example: A Berlin-based marketing agency integrates a generative AI model developed by a Chinese technology company into its content creation workflow. The model is accessed via API and generates marketing copy for the agency's European clients.

Regulatory analysis:

  • EU AI Act: The Chinese model provider is subject to GPAI model obligations under Article 53 because the model is placed on the EU market through the API integration. The Berlin agency, as a deployer, has its own obligations, including compliance with the transparency requirements of Article 50. If the generated content could be mistaken for human-produced content, the agency must disclose its AI-generated nature.
  • Chinese Generative AI Regulation: The Chinese model provider must have completed algorithmic filing and a security assessment with the CAC. Training data must comply with PIPL. Content outputs must align with Chinese content regulations — which may create tensions when the model is used to generate content for European audiences, particularly on topics the Chinese government considers sensitive.
  • GDPR: If any personal data of EU individuals is processed by the Chinese model (including data sent via prompts), GDPR cross-border data transfer restrictions apply. An adequate transfer mechanism (such as Standard Contractual Clauses) is required.

Practical approach: The Berlin agency should conduct thorough due diligence on the Chinese model provider's compliance status under both the EU AI Act and Chinese regulations. Establish data processing agreements that prohibit personal data transmission to the model. Implement output review processes to ensure AI-generated content meets EU transparency requirements.
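One concrete control the agency could implement is a pre-flight check that blocks prompts containing obvious personal data before they reach the third-country API. The regex-based sketch below is intentionally crude — production systems should use a proper DLP or PII-detection service:

```python
import re

# Crude pre-flight guard that blocks prompts containing obvious
# personal data (emails, phone-like numbers) before they are sent to
# a third-country model API. A sketch only -- real deployments should
# use a dedicated DLP / PII-detection service.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt appears to contain personal data."""
    return not (EMAIL_RE.search(prompt) or PHONE_RE.search(prompt))

print(prompt_allowed("Draft a tagline for our spring campaign"))     # True
print(prompt_allowed("Email jane.doe@example.com about the offer"))  # False
```

A guard like this complements, rather than replaces, the contractual prohibition on personal-data transmission: the data processing agreement sets the obligation, the pre-flight check enforces it technically.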

Scenario 3: Global enterprise deploying AI across all four jurisdictions

Real-world example: A Fortune 500 financial services company headquartered in London deploys AI systems across its operations in the UK, EU, US, and China. AI applications include credit risk scoring (EU and UK), fraud detection (US), algorithmic trading (UK and US), customer service chatbots (all jurisdictions), and anti-money laundering screening (all jurisdictions).

Regulatory analysis:

  • EU AI Act: Credit risk scoring is high-risk under Annex III. The company is both a provider (for proprietary AI systems) and a deployer (for vendor-provided systems). Full compliance with the high-risk framework is required, including conformity assessment and fundamental rights impact assessments.
  • UK: The FCA oversees AI in financial services. The company must demonstrate compliance with the five AI principles within the FCA's existing regulatory framework, including the Consumer Duty and Senior Managers and Certification Regime (SM&CR) accountability requirements.
  • US: The SEC oversees algorithmic trading and has proposed rules on predictive data analytics. The CFPB requires adverse action notices for AI-assisted credit decisions. State-level requirements apply for consumer-facing operations.
  • China: Customer service chatbots accessible in China fall under the Generative AI Regulation if they use generative AI. Anti-money laundering AI must comply with China's Data Security Law regarding cross-border data transfers of financial information.

Practical approach: Establish a centralised AI governance function reporting to the Chief Risk Officer. Build the compliance framework around the EU AI Act as the highest common denominator. Use ISO 42001 certification to demonstrate systematic governance to all regulators. Maintain a regulatory matrix that maps each AI system to its applicable jurisdictions and specific requirements. Engage with sector regulators proactively — particularly the FCA and SEC — rather than waiting for enforcement.
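The regulatory matrix mentioned above can be maintained as a simple, queryable data structure. The following is a minimal Python sketch; the system names, jurisdiction codes, and obligation labels are illustrative placeholders, not an authoritative taxonomy:

```python
# Sketch of a regulatory matrix: each AI system maps to the jurisdictions
# it touches and the jurisdiction-specific obligations that apply there.
# System names and obligation labels below are illustrative examples.

REGULATORY_MATRIX = {
    "credit_risk_scoring": {
        "EU": ["Annex III high-risk", "conformity assessment", "FRIA"],
        "UK": ["FCA principles", "Consumer Duty", "SM&CR accountability"],
    },
    "customer_service_chatbot": {
        "EU": ["Article 50 transparency"],
        "US": ["state-level disclosure rules"],
        "UK": ["FCA principles"],
        "CN": ["Generative AI Regulation", "CAC algorithmic filing"],
    },
}

def obligations_for(system: str) -> dict:
    """Return the jurisdiction-to-obligations map for one AI system."""
    return REGULATORY_MATRIX.get(system, {})

def jurisdictions_touching(system: str) -> list:
    """List the jurisdictions in which a system triggers obligations."""
    return sorted(obligations_for(system))

if __name__ == "__main__":
    print(jurisdictions_touching("customer_service_chatbot"))
```

Even a lightweight structure like this forces the governance team to record, per system, exactly which regimes apply — which is the prerequisite for assigning ownership of each jurisdiction-specific obligation.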

Frequently Asked Questions

Which AI regulation is the strictest?

The EU AI Act imposes the most comprehensive and prescriptive obligations, particularly for high-risk AI systems. Its combination of mandatory conformity assessments, extensive documentation requirements, and fines up to EUR 35 million or 7% of global turnover makes it the most demanding framework for regulated AI systems. However, China's regulations are arguably more restrictive in terms of content control and government oversight of algorithmic systems, requiring mandatory state filing and content alignment that has no equivalent in Western regulatory frameworks.

Does the EU AI Act apply to US companies?

Yes. The EU AI Act applies to any company — regardless of where it is headquartered — that places an AI system on the EU market or whose AI system's output is used within the EU. A US company whose AI product is used by customers in any EU Member State must comply with the applicable provisions. The scope is defined in Article 2. This extraterritorial reach mirrors the GDPR's approach to data protection.

Can I comply with all four jurisdictions using one compliance programme?

In practice, yes — but with caveats. A compliance programme built on the EU AI Act as the baseline will cover most requirements in other jurisdictions. However, certain obligations are jurisdiction-specific and cannot be satisfied by EU compliance alone: US state-level bias audits, Chinese algorithmic filing, and UK sector-specific regulatory engagement all require targeted action. The recommended approach is a layered compliance programme using ISO 42001 as the management system and the EU AI Act as the substantive baseline, with jurisdiction-specific modules added on top.

How does China's approach differ from the EU's?

The most fundamental difference is regulatory objective. The EU AI Act is designed to protect fundamental rights, create legal certainty, and foster trustworthy innovation. China's AI regulations are primarily designed to maintain content control, social stability, and state oversight of algorithmic decision-making. Practically, this means Chinese regulations focus heavily on what AI systems can say (content control), while the EU focuses on how AI systems work (process and risk management). Both have extraterritorial reach, but the compliance requirements are often difficult to reconcile — particularly around content control, where Chinese requirements may conflict with EU principles on freedom of expression.

What should I prioritise if I'm just starting multi-jurisdiction AI compliance?

Start with three actions: (1) Classify your AI systems under the EU AI Act's risk framework using a tool like our AI Act assessment. (2) Map your regulatory exposure by identifying which jurisdictions your AI systems touch — not just where you are incorporated, but where your users, data subjects, and AI outputs are located. (3) Build your governance framework using our AI governance framework guide and align it with both the EU AI Act and NIST AI RMF from the start. This gives you a solid foundation that can be extended to additional jurisdictions as needed.

Will there ever be a single global AI regulation?

It is unlikely in the foreseeable future. The OECD AI Principles represent the closest international consensus, but they are non-binding and high-level. The EU, US, UK, and China have fundamentally different governance philosophies — precautionary regulation, innovation-first, adaptive governance, and state control — that reflect deep-seated differences in political systems and policy priorities. The more realistic trajectory is regulatory convergence on principles (transparency, fairness, accountability) combined with continued divergence on specifics (enforcement mechanisms, content requirements, penalty regimes). Organisations should plan for a permanently multi-jurisdictional landscape and build compliance programmes accordingly.

Conclusion

Global AI regulation in 2026 is characterised by convergence on principles and divergence on implementation. The EU AI Act, US sector-specific approach, UK regulator-led governance, and China's content-focused regulations all acknowledge that AI systems require oversight — but they disagree profoundly on what that oversight should look like, who should provide it, and what values it should protect.

For organisations operating across borders, the practical implications are clear:

  1. Build to the EU AI Act as the global baseline. It is the most comprehensive framework, and satisfying its requirements will address the majority of obligations elsewhere.
  2. Map NIST AI RMF and ISO 42001 to create a structured governance layer that demonstrates due diligence across jurisdictions.
  3. Maintain jurisdiction-specific compliance modules for obligations that cannot be satisfied by EU compliance alone — particularly Chinese content requirements and US state-level mandates.
  4. Invest in regulatory monitoring. The landscape is evolving rapidly; Brazil, South Korea, Canada, and others are advancing their own frameworks that may impose new obligations on your AI systems.
  5. Use the EU AI Act's risk classification as your starting point. Our free AI Act risk classification tool and compliance checklist for 2026 can help you establish a baseline within minutes.

The organisations that will navigate this landscape most successfully are those that treat multi-jurisdiction compliance not as a burden but as a competitive advantage — building trust, reducing regulatory risk, and demonstrating to customers, investors, and regulators worldwide that their AI systems are developed and deployed responsibly.

AI Regulation
Global
EU AI Act
US
UK
China
Comparison
International
