Chapter VI — Measures in Support of Innovation

Article 57: AI Regulatory Sandboxes

Applies from 2 August 2026 · EUR-Lex verified April 2026

Article 57 requires each Member State to establish at least one AI regulatory sandbox at national level by 2 August 2026. Sandboxes provide a controlled environment for developing, testing, and validating innovative AI systems under regulatory oversight before they are placed on the market or put into service. The AI Office, national competent authorities, or data protection authorities may also establish sandboxes. Participation is voluntary and does not exempt participants from AI Act obligations, but it does give them structured guidance on compliance expectations and a direct channel to supervisory authorities.

Who does this apply to?

  • Member States, which must establish at least one AI regulatory sandbox at national level by 2 August 2026
  • Innovative AI providers and prospective providers seeking a controlled environment to develop and test AI systems before market placement
  • National competent authorities operating and supervising sandboxes, including data protection authorities where personal data processing is involved

Scenarios

A health-tech start-up wants to test a diagnostic AI on real clinical data before seeking CE marking. It applies to the national competent authority's sandbox to validate performance under regulatory oversight, receiving compliance guidance before formal conformity assessment.

The sandbox provides a structured pathway: the start-up tests iteratively, the authority monitors safeguards, and both sides gain clarity on what the conformity file must contain.
Ref. Art. 57(1)

A Member State's data protection authority partners with the national AI competent authority to launch a sandbox focused on AI systems that process biometric data, ensuring GDPR and AI Act requirements are assessed in parallel.

The joint sandbox allows participants to receive coordinated guidance on both data-protection and AI-safety obligations, reducing regulatory fragmentation.
Ref. Art. 57(3)

What Article 57 does (in plain terms)

Article 57 creates the mandatory sandbox infrastructure across the Union. Each Member State must have at least one operational sandbox by 2 August 2026. Sandboxes serve three functions:

1. Innovation support — providers (and prospective providers) can develop, train, and test AI systems in a controlled environment with direct access to supervisory authorities.
2. Regulatory learning — competent authorities gain hands-on insight into how innovative AI systems work, improving their supervisory capacity.
3. Compliance preparation — sandbox participants receive guidance on regulatory expectations, which helps them prepare conformity documentation and risk management before formal market placement.

Critically, sandbox participation does not exempt anyone from compliance with the AI Act or other Union or Member State law. Authorities retain full supervisory and corrective powers over sandbox participants. The sandbox is a guidance channel, not a regulatory holiday.

Who can establish and participate in sandboxes

Sandboxes may be established by:

  • National competent authorities designated under Article 70
  • The AI Office (for Union-level or cross-border sandboxes)
  • Data protection authorities where the sandbox involves processing of personal data

Participation is open to providers and prospective providers of AI systems. "Prospective providers" captures entities still in the development or pre-market phase—meaning you do not need to be an established company with a finished product to apply. The sandbox may also involve deployers where testing requires real deployment conditions.

Member States are encouraged to involve SMEs and start-ups (see Article 62) and to coordinate cross-border sandboxes where AI systems affect multiple jurisdictions.

How Article 57 connects to the rest of Chapter VI and the Act

  • Article 58 — operational rules for how sandboxes function day-to-day (plans, exit criteria, confidentiality, safeguards).
  • Article 59 — special rules for further processing of personal data inside sandboxes for public-interest AI.
  • Article 60 — testing of high-risk AI in real-world conditions outside sandboxes (complementary pathway).
  • Article 76 — confidentiality obligations of authorities, including sandbox supervisors.
  • Article 113 — application timeline (sandbox obligation from 2 August 2026).

Practical guidance for sandbox applicants

If you plan to apply for a sandbox:

1. Identify the competent authority in your Member State responsible for operating the sandbox (this may differ from the market surveillance authority).
2. Prepare a sandbox plan describing your AI system, its intended purpose, the tests you want to run, the data you will use, and the compliance questions you need guidance on.
3. Map applicable rules — the sandbox does not waive any obligations. Come prepared with a draft conformity plan, including risk management (Article 9) and technical documentation (Article 11).
4. Engage data protection counsel if personal data is involved, especially if you plan to invoke Article 59 for further processing.
5. Document everything — sandbox outcomes (including authority feedback and test results) become valuable evidence for your conformity file.

Compliance checklist

  • Check whether your Member State has published its sandbox application process and timeline — the deadline for at least one operational sandbox is 2 August 2026.
  • Prepare a sandbox plan with clear objectives, testing methodology, data requirements, and expected outcomes before applying.
  • Confirm that all AI Act obligations still apply inside the sandbox — participation does not create exemptions from conformity assessment, risk management, or data governance.
  • If personal data will be processed in the sandbox, coordinate with your DPO and the relevant data protection authority; see Article 59 for special conditions.
  • Document all guidance received from sandbox supervisors and test outcomes — these feed directly into your conformity file and risk management system.
  • Review exit criteria and post-sandbox transition requirements before entering, so your path to market placement is clear from the start.


Frequently asked questions

Does participating in a sandbox mean my AI system is pre-approved?

No. Sandbox participation provides guidance and a controlled testing environment, but it does not constitute regulatory approval or conformity certification. You must still complete the applicable conformity assessment procedure before placing a high-risk AI system on the market.

What if my Member State has not established a sandbox by August 2026?

Article 57 places a binding obligation on each Member State to ensure at least one sandbox is operational by 2 August 2026. If your Member State is late, you may seek access to sandboxes in other Member States or those established by the AI Office. Because the AI Act is a regulation with direct effect rather than a directive requiring transposition, the Commission may also open infringement proceedings against a Member State that fails to meet the deadline.

Can deployers (not just providers) participate in sandboxes?

Article 57 primarily targets providers and prospective providers, but the sandbox framework can accommodate deployers where testing requires real deployment conditions. Check the specific terms of your national sandbox for eligibility.