Chapter VII — Governance · Article 67

Article 67: Scientific Panel of Independent Experts

Applies from 2 Aug 2026 · EUR-Lex verified Apr 2026

Article 67 of Regulation (EU) 2024/1689 establishes a scientific panel of independent experts to support enforcement activities under the AI Act. The panel provides technical advice to the AI Office on general-purpose AI (GPAI) model classification, systemic risk assessment, development of evaluation tools and methodologies, and alerts on possible risks from GPAI models. Panel members are selected for up-to-date scientific and technical expertise and must be free from conflicts of interest — their independence is essential to the credibility of the enforcement framework.

Who does this apply to?

  • Independent AI experts selected to serve on the scientific panel
  • The AI Office, which receives technical advice and risk alerts from the panel
  • Providers of GPAI models subject to panel evaluations and systemic risk classification

Scenarios

A GPAI model provider releases a new foundation model with significantly increased parameter count and training compute. The AI Office asks the scientific panel to assess whether the model should be classified as presenting systemic risk under Article 51.

The scientific panel evaluates the model's capabilities and potential systemic impacts, and advises the AI Office on classification, drawing on benchmarks and evaluation methodologies it helped develop.
Ref. Art. 67(1)(a)

The scientific panel identifies emerging dual-use risks in a class of open-weight GPAI models capable of generating novel chemical synthesis instructions. It issues a qualified alert to the AI Office.

The AI Office initiates a review of the relevant providers' compliance with GPAI obligations under Chapter V and considers whether additional measures are needed.
Ref. Art. 67(1)(c)

What Article 67 does (in plain terms)

Article 67 creates a panel of independent scientific and technical experts whose primary job is to provide rigorous, evidence-based advice to the AI Office. The panel is not a policy or stakeholder body (that role belongs to the advisory forum) — it is an expert resource for enforcement-critical technical questions.

The panel's core tasks include:

  (a) GPAI classification and systemic risk: advising the AI Office on whether a general-purpose AI model should be classified as presenting systemic risk, including assessing capabilities, reach, and potential harms.
  (b) Evaluation tools and methodologies: contributing to the development of tools, benchmarks, and methodologies for evaluating GPAI model capabilities — particularly for models with systemic risk.
  (c) Risk alerts: issuing qualified alerts to the AI Office when a GPAI model may pose specific risks at Union level.
  (d) Model evaluations: advising on and supporting model evaluations carried out by the AI Office, including evaluations under Article 93.

Independence is paramount: panel members are selected based on up-to-date scientific or technical expertise and must be free from conflicts of interest. The selection criteria and procedures are set out in the article — see EUR-Lex Article 67.

How Article 67 connects to the rest of the Act

  • Article 51 (classification of GPAI models with systemic risk): the scientific panel advises on whether a model crosses the systemic-risk threshold.
  • Article 55 (obligations for providers of GPAI models with systemic risk): the panel's assessments feed into enforcement of these obligations.
  • Article 56 (codes of practice for GPAI): the panel's technical findings may inform the substance of codes of practice.
  • Article 64 (AI Board tasks): the Board may draw on the panel's expertise indirectly via the AI Office.
  • Article 68 (Member State access to the pool of experts): national authorities can call on the same experts for national enforcement.
  • Article 93 (compliance and enforcement powers): the panel supports AI Office evaluations under these provisions.
  • Article 113 (application dates): the governance architecture applies on the timeline set out in Article 113.

Practical implications for GPAI model providers

If you develop or distribute general-purpose AI models:

  • Expect panel-informed evaluations: the AI Office may rely on the scientific panel's advice when assessing whether your model has systemic risk under Article 51. Prepare documentation on model capabilities, training data, and compute.
  • Monitor panel publications: the panel's work on evaluation methodologies and benchmarks will shape what the AI Office expects in practice. Tracking panel outputs lets you anticipate evaluation criteria.
  • Engage constructively with evaluations: if the AI Office initiates a model evaluation (see Article 93), the panel's technical findings will inform the outcome. Transparent cooperation is in your interest.

If you are a national authority, see Article 68 for how you can access the panel's expertise for your own enforcement activities.

Independence and conflict-of-interest safeguards

Article 67 places significant emphasis on independence. Panel members must:

  • Possess up-to-date scientific or technical expertise in AI-relevant disciplines.
  • Be free from conflicts of interest — including financial, professional, or institutional ties that could compromise objectivity.
  • Comply with confidentiality obligations regarding information obtained in the course of their duties.

These requirements are designed to ensure that the panel's advice carries scientific credibility and cannot be dismissed as captured by any particular interest group. The full selection and governance procedures are set out in the text of Article 67 — see EUR-Lex.

Compliance checklist

  • If you are a GPAI model provider, prepare technical documentation on model capabilities, training compute, and risk assessments that may be reviewed with panel input.
  • Monitor scientific panel publications for emerging evaluation methodologies and benchmarks relevant to systemic risk classification.
  • Ensure internal teams understand the distinction between the advisory forum (stakeholder input) and the scientific panel (independent technical expertise).
  • Track any qualified alerts issued by the panel — they may signal forthcoming enforcement action by the AI Office.
  • If you are a national authority, review Article 68 for the procedure to request technical support from the panel.
  • Verify that your GPAI model documentation addresses the criteria the panel is mandated to assess (capabilities, systemic impact, dual-use potential).
  • Align compliance timelines with the staged application dates in Article 113.


Frequently asked questions

Who sits on the scientific panel?

The panel is composed of independent experts selected for their up-to-date scientific or technical expertise in AI. They must be free from conflicts of interest. The Commission establishes the selection process and criteria as specified in Article 67.

Can the scientific panel force a GPAI model to be withdrawn from the market?

No. The scientific panel is advisory — it provides technical advice and risk alerts to the AI Office. Enforcement decisions (including ordering compliance measures or restricting market access) are taken by the AI Office and competent authorities under Articles 89–93, not by the panel itself.

How does the scientific panel relate to the advisory forum under Article 66?

They serve different functions. The advisory forum is a broad multi-stakeholder body (industry, civil society, academia) advising the AI Board and Commission on policy and practical implementation. The scientific panel is an independent expert body focused on technical assessments — especially GPAI model classification, systemic risk, and evaluation methodologies.