AI Act Provider vs Deployer: Who Is Responsible for What Under the EU AI Act?
TL;DR
- The AI Act assigns compliance obligations based on role, not company size or technology. The two primary roles are provider (develops or places AI on the market) and deployer (uses AI under its own authority).
- Providers bear the heaviest obligations: risk management, technical documentation, conformity assessment, CE marking, QMS, post-market monitoring, and EU database registration.
- Deployers must ensure human oversight, retain logs, inform affected persons, and (for public bodies, private entities providing public services, and certain credit and insurance use cases) complete a fundamental rights impact assessment.
- A deployer becomes a provider under Article 25 in three scenarios: placing the system under its own brand, making a substantial modification, or changing the intended purpose to make it high-risk.
- Importers and distributors have their own verification and documentation duties.
- Misidentifying your role is one of the most common compliance failures — it affects every downstream obligation.
- Both providers and deployers face fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher, for non-compliance with high-risk obligations.
One of the most consequential decisions under the EU AI Act is not about technology — it is about role. Are you a provider or a deployer? The answer determines the scope, cost, and urgency of your compliance obligations. Getting the role determination wrong cascades into every other compliance decision: wrong documentation scope, wrong conformity assessment, wrong notification obligations.
This guide breaks down each role's obligations side-by-side, explains the situations where a deployer becomes a provider, covers the frequently overlooked importer and distributor roles, and flags the most common misunderstandings with real-world examples.
Definitions: provider vs deployer vs importer vs distributor
The AI Act defines all supply chain roles in Article 3:
Provider (Article 3(3)): A natural or legal person that develops an AI system or a general-purpose AI model, or that has an AI system or GPAI model developed, and places it on the market or puts it into service under its own name or trademark — whether for payment or free of charge.
Deployer (Article 3(4)): A natural or legal person that uses an AI system under its authority — except where the system is used in the course of a personal, non-professional activity.
Importer (Article 3(6)): A natural or legal person located in the EU that places on the EU market an AI system that bears the name or trademark of a provider established outside the EU.
Distributor (Article 3(7)): A natural or legal person in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.
How to determine your role in practice
The role determination flows from what you do, not what you call yourself. Ask these questions (a code sketch of the same decision logic follows the list):
- Did you develop, train, or commission the development of the AI system? → Likely provider.
- Did you place the system on the market under your own name or trademark? → Provider, even if someone else built it.
- Did you purchase, license, or subscribe to an AI system and use it within your own operations? → Likely deployer.
- Are you bringing a non-EU provider's AI system into the EU market? → Importer.
- Are you distributing an AI system you neither developed nor imported? → Distributor.
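To make the branching explicit, here is a minimal sketch of the same questionnaire, assuming boolean answers to each question. The enum, function, and parameter names are our own shorthand for the Article 3 definitions, not statutory terms; a real determination should be reviewed by counsel.

```python
from enum import Enum, auto

class Role(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()

def determine_roles(
    developed_or_commissioned: bool,      # built it, or had it built
    markets_under_own_brand: bool,        # places it on the market as yours
    uses_in_own_operations: bool,         # runs it under your own authority
    imports_from_non_eu_provider: bool,
    makes_available_in_supply_chain: bool,
) -> set[Role]:
    """Map the Article 3 role questions to (possibly several) roles."""
    roles: set[Role] = set()
    # Developing a system, or placing it on the market under your own
    # name or trademark, makes you the provider (Article 3(3)).
    if developed_or_commissioned or markets_under_own_brand:
        roles.add(Role.PROVIDER)
    # Using a system under your own authority makes you a deployer
    # (Article 3(4)), even if you are also its provider.
    if uses_in_own_operations:
        roles.add(Role.DEPLOYER)
    # Placing a non-EU provider's branded system on the EU market
    # makes you the importer (Article 3(6)).
    if imports_from_non_eu_provider:
        roles.add(Role.IMPORTER)
    # Anyone else in the supply chain making the system available on
    # the EU market is a distributor (Article 3(7)).
    if makes_available_in_supply_chain and not (roles & {Role.PROVIDER, Role.IMPORTER}):
        roles.add(Role.DISTRIBUTOR)
    return roles
```

Note that the function returns a set: a bank that builds its own fraud detection model and also licenses a third-party credit scoring tool (see Mistake 5 below) would call it once per system and get {PROVIDER, DEPLOYER} for the first and {DEPLOYER} for the second.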
Real-world example — HR technology company: TalentScore Inc. develops an AI-powered CV screening tool and sells it to European employers under the "TalentScore" brand. TalentScore is the provider. Each employer that licenses and uses TalentScore for its own recruitment is a deployer. If TalentScore's European reseller purchases the product and makes it available to employers, that reseller is a distributor.
Real-world example — financial services: A European bank licenses an AI credit scoring model from a US-based fintech company. The US company is the provider. The bank is the deployer. If the bank's EU subsidiary formally imports the system (places it on the EU market bearing the US provider's name), the subsidiary is also the importer and has specific verification obligations.
Real-world example — consulting firm: A management consulting firm develops a custom AI system for a client's internal use. If the firm places it on the market under its own brand, the firm is the provider. If the firm builds it under a work-for-hire arrangement and the client places it under the client's brand, the client is the provider — even though the firm did the development.
Side-by-side obligations for high-risk AI systems
The table below summarises the core obligations of providers and deployers of high-risk AI systems. For the full legal text of each provision, see the complete AI Act guide.

| Obligation | Provider | Deployer |
| --- | --- | --- |
| Risk management system (Article 9) | Establish and maintain throughout the lifecycle | – |
| Data governance (Article 10) | Ensure quality of training, validation, and testing data | Ensure input data under its control is relevant and sufficiently representative (Article 26(4)) |
| Technical documentation (Article 11, Annex IV) | Draw up and keep up to date | – |
| Logging (Articles 12, 26(6)) | Design the system to log events automatically | Retain logs for at least six months |
| Instructions for use (Article 13) | Provide clear, comprehensive instructions | Use the system in accordance with them (Article 26(1)) |
| Human oversight (Articles 14, 26(2)) | Design the system to allow effective oversight | Assign competent, trained personnel |
| Accuracy, robustness, cybersecurity (Article 15) | Ensure by design | – |
| Quality management system (Article 17) | Implement across the lifecycle | – |
| Conformity assessment and CE marking (Articles 43, 48) | Complete the assessment and affix the marking | Verify the provider has done so |
| EU database registration (Article 49) | Register the system | Register the deployment if a body governed by public law |
| Post-market monitoring and incident reporting (Articles 72, 73) | Operate a monitoring system; report serious incidents | Monitor operation; inform the provider and, where required, authorities |
| Informing affected persons (Article 26(7), (11)) | – | Inform workers and persons subject to the system |
| Fundamental rights impact assessment (Article 27) | – | Required for public bodies, private entities providing public services, and certain credit and insurance uses |
When a deployer becomes a provider: Article 25
This is one of the most underestimated provisions in the AI Act. Under Article 25, a deployer is treated as a provider — and assumes all provider obligations — in three scenarios:
Scenario 1: Placing the system under your own name or trademark
You license an AI engine from a vendor, integrate it into your product, and sell it as "YourBrand AI." You are now the provider of a high-risk AI system, even though another company built the underlying technology.
Concrete example: A European insurance company licenses an AI risk assessment model from a third-party vendor. The insurer integrates the model into its customer-facing platform and markets it to customers as part of its branded insurance application process. Because the system is placed on the market under the insurer's brand, the insurer becomes the provider under Article 25 and must fulfil all provider obligations — including technical documentation, conformity assessment, and CE marking.
Scenario 2: Making a substantial modification to a high-risk AI system
You take a deployed high-risk system and make a substantial modification that changes its performance characteristics, risk profile, or compliance with the original conformity assessment.
Concrete example: A bank deploys a vendor-provided AI credit scoring system. The bank's data science team retrains the model on proprietary data that includes alternative credit indicators not covered in the vendor's original training data. This retraining changes the model's performance characteristics and risk profile. The bank has made a substantial modification and becomes the provider of the modified system under Article 25.
What counts as "substantial"? Article 3(23) defines a substantial modification as a change not foreseen or planned in the provider's initial conformity assessment that affects the system's compliance with the high-risk requirements or changes its intended purpose. For systems that continue to learn, changes pre-determined by the provider and assessed at the initial conformity assessment are expressly not substantial (Article 43(4)). Changes to the model's weights, training data, or intended deployment context are strong indicators of a substantial modification. Routine maintenance, security patches, and parameter adjustments within the provider's specified range generally are not.
Scenario 3: Modifying the intended purpose to make it high-risk
You deploy a non-high-risk AI system and repurpose it for a use case that falls into an Annex III domain, making it high-risk.
Concrete example: A company deploys a general-purpose chatbot for customer service (limited-risk, subject only to transparency obligations). The company then repurposes the chatbot as an initial screening tool for job applicants, routing candidates based on the chatbot's assessment of their responses. Recruitment screening falls into Annex III Domain 4 (employment). The company has changed the system's intended purpose to make it high-risk and becomes the provider under Article 25.
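Taken together, the three scenarios lend themselves to a small recorded assessment at procurement or integration time. The sketch below is illustrative only: the class and field names are our own shorthand, and the booleans assume you have already answered each trigger question for the system at hand.

```python
from dataclasses import dataclass

@dataclass
class Article25Assessment:
    """Records which Article 25 triggers fire for one AI system.

    Field names are our own labels for the three scenarios; the
    assessment itself should be reviewed by legal counsel.
    """
    rebranded_under_own_name: bool   # scenario 1, Article 25(1)(a)
    substantial_modification: bool   # scenario 2, Article 25(1)(b)
    repurposed_to_high_risk: bool    # scenario 3, Article 25(1)(c)
    notes: str = ""

    def fired_triggers(self) -> list[str]:
        triggers = []
        if self.rebranded_under_own_name:
            triggers.append("placed on the market under own name or trademark")
        if self.substantial_modification:
            triggers.append("substantial modification of a high-risk system")
        if self.repurposed_to_high_risk:
            triggers.append("intended purpose changed to a high-risk use")
        return triggers

    @property
    def becomes_provider(self) -> bool:
        # Any single trigger shifts the full provider obligation
        # stack onto the former deployer.
        return bool(self.fired_triggers())
```

As the fine-tuning FAQ below notes, document the outcome even when no trigger fires.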
Consequences of becoming a provider
When Article 25 applies:
- The former deployer must comply with all provider obligations (risk management, documentation, conformity assessment, etc.)
- The original provider must cooperate: supplying the technical information, documentation, and access needed to facilitate the new provider's compliance (Article 25(2))
- The new provider must complete a conformity assessment for the modified system or new intended purpose before continued market placement
This is not a theoretical risk. Organisations that fine-tune, rebrand, or repurpose AI systems without recognising the Article 25 trigger will face provider-level obligations retroactively — with no documentation trail and no conformity assessment on record.
Importer and distributor obligations
While providers and deployers receive the most attention, importers and distributors have enforceable obligations that should not be overlooked.
Importers (Article 23)
Before placing a high-risk AI system on the EU market, importers must verify that:
- the provider has completed the conformity assessment and drawn up the technical documentation per Annex IV,
- the system bears the CE marking,
- it is accompanied by the EU declaration of conformity and instructions for use, and
- the provider has appointed an authorised representative in the EU where applicable.
Importers must also indicate their name and address on the system or its packaging. A non-compliant system must not be placed on the market.
Distributors (Article 24)
Before making a high-risk AI system available, distributors must verify CE marking, the EU declaration of conformity, and instructions for use. Distributors do not perform their own technical review but act as a quality gate: if they have reason to believe a system is non-compliant, they must not make it available until the issue is resolved.
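Both sets of checks reduce to a pre-market gate. A minimal sketch, assuming each piece of evidence is tracked as a boolean; the dictionary keys are our own labels, not statutory terms:

```python
# Evidence an importer or distributor must see before a high-risk
# AI system reaches the EU market (Articles 23-24).
REQUIRED_EVIDENCE = {
    "conformity_assessment": "completed by the provider (Article 43)",
    "technical_documentation": "drawn up per Annex IV",
    "ce_marking": "affixed to the system (Article 48)",
    "eu_declaration_of_conformity": "accompanies the system (Article 47)",
    "instructions_for_use": "accompany the system (Article 13)",
}

def verify_before_placing(evidence: dict[str, bool]) -> list[str]:
    """Return the failed checks; the system must not be placed on
    the market or made available while any check fails."""
    return [
        f"missing: {item} ({requirement})"
        for item, requirement in REQUIRED_EVIDENCE.items()
        if not evidence.get(item, False)
    ]
```

Keeping the returned list on file also satisfies the record-keeping step in the practical checklist below.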
Common role confusion scenarios
"We use a vendor's AI for hiring — are we the provider?"
No, you are the deployer — as long as you use the system within its intended purpose and under the vendor's name. But you have real and substantial obligations: assign competent persons for human oversight, notify job candidates they are subject to an AI system, retain system logs for at least 6 months, and (if you are a public body or essential service provider) complete a fundamental rights impact assessment. You must also verify the vendor has completed conformity assessment and can produce the EU declaration of conformity.
"We fine-tuned GPT-4 and offer it as a product"
You are the provider. You placed a modified system on the market under your own name. The full provider obligation stack applies. If the fine-tuned system falls into an Annex III domain (e.g., you built a legal advice tool used in dispute resolution), you must complete conformity assessment, prepare Annex IV technical documentation, and register in the EU database. The GPAI model provider (OpenAI, in this case) has separate obligations under Articles 51-55 but does not relieve you of your provider duties for the downstream application.
"We white-label an AI product from another vendor"
You are the provider under Article 25(1)(a) — placing the system on the market under your own name or trademark. The fact that another company developed the technology does not reduce your obligations. You need the original provider's full cooperation to obtain the technical documentation, training data governance records, and conformity assessment materials you need to fulfil your obligations.
"We are a SaaS platform — our customers configure the AI"
If you market the system and your customers deploy it, you are the provider and they are deployers. But if a customer substantially modifies the system (retraining, fine-tuning, or changing the intended purpose), the customer may become the provider of the modified system under Article 25. Your instructions for use should clearly define the boundary between permissible configuration and substantial modification.
"We resell an AI system into the EU from outside the EU"
If you are established in the EU and make the system available on the EU market bearing the non-EU provider's name, you are an importer. If you are in the EU supply chain but did not import the system yourself, you are a distributor. Both roles carry verification obligations. If you place the system under your own brand, you become the provider regardless of your supply chain position.
"We only provide the AI model, not the application"
If you provide a general-purpose AI model that downstream developers integrate into applications, you are a GPAI model provider subject to Articles 51-55. You are not the provider of the downstream high-risk AI system — that responsibility falls on whoever integrates your model into the specific application and places it on the market. However, you must provide the downstream provider with sufficient technical information, documentation, and model capability disclosures to enable their compliance.
"We built an AI system for internal use only"
If you developed the AI system and put it into service within your own organisation (not placing it on the market for others), you are both the provider and the deployer. You bear the full set of provider obligations for development and the deployer obligations for use. There is no exemption for internal-use-only systems that fall into high-risk categories.
Supply chain compliance: who owes what to whom
The AI Act creates a chain of responsibility. Each actor in the supply chain has obligations to the next (sketched as a simple mapping after this list):
- GPAI model provider → must provide downstream providers with model cards, technical documentation, and capability disclosures
- AI system provider → must provide deployers with instructions for use, CE marking, EU declaration of conformity, and technical documentation access
- Importer → must verify the provider's compliance before placing the system on the EU market
- Distributor → must verify CE marking and documentation before making the system available
- Deployer → must verify the provider's compliance status, follow instructions for use, and maintain operational compliance
A break at any point in this chain creates risk for downstream actors. Deployers should not assume their providers are compliant — verification is an explicit legal obligation.
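Rendered as a plain data structure (a rough sketch; the keys and summaries are our own shorthand, not statutory wording), the chain looks like this:

```python
# What each supply-chain actor owes the actor downstream of it.
OWED_DOWNSTREAM = {
    "gpai_model_provider": [
        "model documentation and capability disclosures (Articles 53, 55)",
    ],
    "ai_system_provider": [
        "instructions for use (Article 13)",
        "CE marking and EU declaration of conformity (Articles 47-48)",
        "access to technical documentation where required",
    ],
    "importer": [
        "verification of the provider's compliance (Article 23)",
    ],
    "distributor": [
        "verification of CE marking and documentation (Article 24)",
    ],
    "deployer": [
        "operational compliance, oversight, and incident reporting (Article 26)",
    ],
}
```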
Practical steps for each role
If you are a provider
- Classify your AI systems by risk level — this determines the scope of your obligations.
- Build your Annex IV technical documentation early — it is the backbone of conformity assessment and the single most time-consuming compliance deliverable.
- Establish your risk management system as a living, iterative process — not a one-time assessment.
- Implement your quality management system covering the full lifecycle.
- Complete conformity assessment before the 2 August 2026 deadline.
- Register in the EU database (Article 49).
- Establish post-market monitoring processes and serious incident reporting channels.
- Prepare clear, comprehensive instructions for use for your deployers — this is a legal obligation, not a nice-to-have.
If you are a deployer
- Verify your provider's compliance: Request the EU declaration of conformity, CE marking evidence, and instructions for use. If the provider cannot produce these, escalate immediately — you are deploying a potentially non-compliant system.
- Assign trained human oversight personnel: Identify individuals with the competence, training, and authority to oversee the AI system's operation and intervene when necessary.
- Set up log retention: Retain automatically generated logs for at least 6 months; check national law for longer retention requirements (see the retention sketch after this list).
- Inform affected individuals: Implement a process for notifying people that they are subject to a high-risk AI system before the system is used on them.
- Complete your FRIA if required: bodies governed by public law, private entities providing public services, and deployers of credit-scoring or insurance risk-pricing systems must carry one out (Article 27). Use the FRIA guide.
- Register your deployment if you are a body governed by public law (Article 49).
- Monitor and report: Watch for malfunctions, performance degradation, and misuse. Report serious incidents to the provider and (where required) to market surveillance authorities.
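For the log-retention step above, here is a minimal sketch of a deletion gate, assuming UTC timestamps on automatically generated logs. The 183-day constant approximating six months is our own choice, and national law may require longer:

```python
from datetime import datetime, timedelta, timezone

# Article 26(6) sets a six-month floor; treat longer national or
# sectoral retention periods as overriding this default.
MIN_RETENTION = timedelta(days=183)  # roughly six months

def may_delete(log_created_at: datetime,
               retention: timedelta = MIN_RETENTION) -> bool:
    """True once an automatically generated log record has been
    retained for at least the configured minimum period."""
    return datetime.now(timezone.utc) - log_created_at >= retention
```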
If you are an importer or distributor
- Establish a verification checklist: CE marking, EU declaration of conformity, instructions for use, and evidence of conformity assessment.
- Do not make non-compliant systems available on the EU market.
- Maintain records that demonstrate you performed the required verification.
- Report non-compliant systems to relevant market surveillance authorities.
Common mistakes in role determination
Mistake 1: Assuming the contract defines the role
Your commercial contract may call you a "customer" or "licensee," but the AI Act determines roles based on what you actually do. If you place the system on the market under your brand, you are the provider — regardless of what the license agreement says.
Mistake 2: Ignoring the Article 25 trigger
Many organisations fine-tune, rebrand, or repurpose AI systems without recognising they have crossed into provider territory. The compliance gap between deployer and provider obligations is enormous. Performing an Article 25 assessment should be part of every AI procurement and integration process.
Mistake 3: Failing to verify the provider's compliance
Deployers have an explicit obligation to verify that their provider has completed conformity assessment. "We assumed the vendor was compliant" is not a defence. Request documentation. If the vendor cannot provide it, treat it as a red flag.
Mistake 4: Neglecting the instructions for use
Providers must produce instructions for use. Deployers must follow them. In practice, instructions are often generic, inadequate, or ignored. Providers that produce insufficient instructions face enforcement risk; deployers that ignore adequate instructions lose their ability to claim good-faith compliance.
Mistake 5: Treating all AI systems as one role
An organisation can simultaneously be a provider for some AI systems and a deployer for others. A bank that develops its own fraud detection model (provider) and also licenses a third-party credit scoring tool (deployer) must manage both sets of obligations in parallel.
Determine your role and obligations
The first step is knowing where you stand. Run the free AI Act assessment to determine your risk classification and applicable role-based obligations.
For the full legal text of each provision, see the complete AI Act guide.
Frequently asked questions
Can a company be both a provider and a deployer at the same time?
Yes. If you develop an AI system (provider) and also use it within your own organisation (deployer), you bear both sets of obligations. This is common for companies that build internal AI tools for HR, credit assessment, or operations. You must complete provider obligations (documentation, conformity assessment, QMS) and deployer obligations (human oversight, affected person notification, log retention) for the same system.
What happens if my AI provider is based outside the EU?
You still have deployer obligations. Additionally, if the non-EU provider has not appointed an authorised representative in the EU, enforcement becomes more complex — but your obligations as deployer remain unchanged. As a practical matter, insist that your non-EU provider demonstrate compliance equivalent to the AI Act's requirements and provide contractual commitments to support your compliance.
Does fine-tuning always make me a provider?
Not necessarily. The question is whether your modification is "substantial" within the meaning of Article 25. Fine-tuning on a small dataset within the parameters specified by the original provider's instructions for use may not cross the threshold. Retraining on entirely new data, changing the model architecture, or altering performance characteristics almost certainly does. Document your assessment either way.
What obligations do deployers have for AI systems that are not high-risk?
For limited-risk AI systems (e.g., chatbots, deepfakes, emotion recognition not in an Annex III context), deployers must comply with transparency obligations under Article 50: inform users they are interacting with AI, label AI-generated content, and disclose deepfakes. For minimal-risk systems, there are no mandatory obligations, though the AI Act encourages voluntary codes of conduct.
How should providers and deployers handle disagreements about compliance responsibilities?
The AI Act does not create a contractual dispute resolution mechanism — it assigns obligations by role. If a deployer believes the provider has not fulfilled its obligations (e.g., inadequate instructions for use, missing conformity assessment), the deployer should: (1) formally notify the provider in writing, (2) escalate through contractual channels, and (3) if unresolved, report to the relevant market surveillance authority. Deployers cannot simply "inherit" the provider's non-compliance as an excuse for their own.
Is there a transition period for existing AI systems already on the market?
Yes. Under Article 111(2), high-risk AI systems placed on the market or put into service before 2 August 2026 are required to comply only if their design is significantly changed after that date; systems intended for use by public authorities must comply by 2 August 2030 in any case. New systems, and existing systems that undergo significant changes, must comply in full. The compliance checklist in our 2026 compliance guide covers the steps for bringing existing systems into compliance.
Legalithm is an AI-assisted compliance workflow tool — not legal advice. Role determinations should be reviewed by qualified legal counsel.