AI Risk Management System Under Article 9: The Complete Implementation Guide
TL;DR
- Article 9 of the EU AI Act mandates a comprehensive risk management system for every high-risk AI system. It is not a one-time assessment — it is a continuous, iterative process that spans the entire AI lifecycle, from initial design through post-market operation and decommissioning.
- The risk management system must identify and analyze known and reasonably foreseeable risks, evaluate risks arising from both intended use and reasonably foreseeable misuse, and integrate findings from post-market monitoring data in a continuous feedback loop.
- Residual risk — the risk that remains after mitigation — must be judged acceptable when weighed against the benefits of the AI system. Providers must communicate residual risks to deployers through instructions for use.
- Testing is not optional or cosmetic. Article 9(5)–(7) requires testing against pre-defined metrics, under real-world conditions where appropriate, and with specific attention to impacts on children and vulnerable groups.
- Article 9's requirements map closely to ISO 42001 (Clause 6.1) and the NIST AI RMF (GOVERN and MAP functions), enabling organizations to build a unified compliance framework.
- Enforcement of high-risk AI obligations begins 2 August 2026. Non-compliance can result in fines of up to EUR 15 million or 3% of global annual turnover.
- Use Legalithm's AI Act Assessment tool to determine whether your AI system qualifies as high-risk and what risk management obligations apply.
If you read only one article of the EU AI Act, make it Article 9. The risk management system is the foundation on which every other high-risk obligation rests. Technical documentation requires you to describe your risk management process. Data governance draws on risks you have identified. Human oversight must be calibrated to residual risks. Post-market monitoring feeds back into the risk management system. Every road leads to — and from — Article 9.
Yet risk management is where many organizations stumble, treating it as a compliance checkbox rather than the living, iterative process Article 9 demands. This guide breaks down every paragraph of Article 9, translates the legal requirements into implementation steps, maps them to international standards, and provides the tools you need to build a compliant risk management system.
Why Article 9 is the backbone of AI Act compliance
A foundational obligation for all high-risk AI
If your AI system falls into the high-risk category — whether through Annex I (as a safety component of regulated products) or Annex III (standalone high-risk use cases like biometric identification, critical infrastructure, employment, creditworthiness, or law enforcement) — Article 9 applies. Not sure whether your system qualifies? Our classification guide walks through the analysis, and Legalithm's AI Act Assessment can give you an answer in minutes.
Article 9 is the first substantive obligation in Section 2 (Articles 8–15). You cannot meaningfully comply with data governance, transparency, accuracy, robustness, or human oversight requirements unless you have first identified the risks your system poses.
Continuous, iterative, and lifecycle-spanning
Article 9(1) specifies that the risk management system shall be a "continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system." This rules out one-time assessments, static documents produced for conformity assessment and then archived, and ad hoc reviews triggered only by incidents. Instead, Article 9 demands a process that is planned (with defined activities, timelines, and responsibilities), iterative (each cycle informs the next), and lifecycle-spanning (from design through decommissioning). Medical device manufacturers have operated under similar frameworks (ISO 14971) for decades — Article 9 brings AI into the same paradigm.
August 2026 enforcement
Article 9 obligations become enforceable on 2 August 2026. Providers placing a high-risk AI system on the EU market on or after that date must have a fully implemented risk management system. Systems already on the market must be brought into compliance. For all enforcement dates, see our EU AI Act timeline.
Article 9 — the legal requirements explained
Establishing a risk management system (Article 9(1))
Article 9(1) requires that a risk management system be established, implemented, documented, and maintained — four verbs, four distinct obligations. The system must be formally created with defined governance and methodology (established), actively executed rather than merely described in policy (implemented), recorded in sufficient detail for auditors and regulators (documented), and kept current as the system and its environment evolve (maintained).
Article 9(1) further specifies "regular systematic updating" — updates cannot be sporadic or informal but must follow a defined cadence and methodology.
Identification and analysis of known and foreseeable risks (Article 9(2)(a))
Article 9(2)(a) requires identifying and analyzing known risks (documented in the state of the art, observed in testing, or flagged in prior versions) and reasonably foreseeable risks (those a competent professional would anticipate, even if not yet observed).
The scope covers health, safety, and fundamental rights — meaning risk identification must go beyond technical failure modes to include impacts on privacy, non-discrimination, freedom of expression, and other Charter rights. The Fundamental Rights Impact Assessment (FRIA) complements this analysis for deployers.
Estimation and evaluation of risks from intended use and foreseeable misuse (Article 9(2)(b))
Article 9(2)(b) requires estimating and evaluating risks from both intended purpose (the documented purpose the system is designed for) and reasonably foreseeable misuse (uses that are not intended but can be predicted based on human behavior and the system's capabilities). For example, a hiring-screening AI might be misused to filter candidates on protected characteristics correlated with legitimate criteria.
This requirement forces you to ask: "what happens when people use it in ways we did not intend but should have predicted?" Failing to consider foreseeable misuse is one of the most common compliance gaps.
Evaluation of risks from post-market monitoring data (Article 9(2)(c))
Article 9(2)(c) creates a formal feedback loop between post-market monitoring and risk management. Real-world data about performance, failures, near-misses, and incidents must flow back into the risk management system — revealing risks not identified pre-market, known risks manifesting differently than estimated, or new risks from environmental changes like distribution shift. Post-market monitoring is an input to risk management, not a parallel obligation.
Adoption of suitable risk management measures (Article 9(2)(d))
Article 9(2)(d) requires adopting "appropriate and targeted" risk management measures. Measures must be proportionate (catastrophic risks demand robust mitigation), specific (each identified risk needs a corresponding measure — generic statements like "we will monitor the system" are insufficient), and effective (actually reducing risk, not merely creating an appearance of reduction).
The AI Act does not prescribe specific measures, giving providers flexibility — but the burden of justification falls squarely on the provider if challenged by a market surveillance authority.
Residual risk — what's acceptable (Article 9(3)–(4))
Articles 9(3) and 9(4) establish three critical principles for residual risk:
- Interaction effects matter. Mitigation measures can interact — reducing one risk might increase another. The system must consider these interactions holistically.
- Balance is required. You must achieve an appropriate balance between compliance burdens and risks. Eliminating all risk is neither expected nor possible.
- Residual risk must be acceptable. After all mitigation, the remaining risk must be judged acceptable — for each individual hazard and for the overall system.
Acceptability is determined by weighing residual risk against benefits, the state of the art, societal expectations, and relevant harmonized standards. We address this in depth in a dedicated section below.
Testing to ensure consistent performance (Article 9(5)–(7))
Articles 9(5)–(7) impose four testing requirements:
- Purpose-driven — testing is linked to identifying appropriate risk management measures, not merely demonstrating technical performance.
- Criteria-based — against pre-defined metrics and probabilistic thresholds, not ad hoc benchmarks chosen after the fact.
- Continuous — performed throughout development and prior to market release, not only at the end.
- Consistency-oriented — results must demonstrate reliable behavior across the range of conditions the system will encounter, not just accuracy on an average dataset.
For detailed guidance on testing methodologies, including bias testing, see our AI bias testing and fairness guide.
Risk management for specific populations (Article 9(8)–(9))
Article 9(8) requires that when testing high-risk AI systems, specific consideration be given to whether the system is likely to be used by or affect children or other vulnerable groups. Article 9(9) extends the obligation to the development process itself, requiring that the risk management system take into account the specific needs and characteristics of intended users, including their technical knowledge, experience, and potential vulnerabilities.
This has practical consequences:
- AI systems used in education must evaluate risks specific to children's cognitive development, susceptibility to manipulation, and data protection (see our AI Act education compliance guide).
- AI systems used in healthcare must evaluate risks specific to patients, including vulnerable populations such as the elderly, persons with disabilities, and those with limited health literacy.
- AI systems used in employment must evaluate risks to groups that may face structural disadvantages in hiring processes.
Testing datasets and scenarios must be designed to reflect these specific populations, not merely the "average" user.
The risk management lifecycle
Article 9 describes a continuous, iterative process. In practice, this process can be structured into five phases that align with the AI system lifecycle; each phase below is described together with its relationship to the Article 9 requirements.
Phase 1 — Risk identification (pre-development)
Before writing a single line of code or selecting a training dataset, the risk identification phase establishes the risk landscape. Activities include:
- Context analysis. What is the intended purpose of the AI system? Who are the intended users? Who are the affected persons? In what environment will the system operate?
- Regulatory mapping. What sector-specific regulations apply in addition to the AI Act? (e.g., MDR for medical devices, CRD for credit scoring, GDPR for personal data). See our EU AI Act compliance checklist for a comprehensive overview.
- Hazard identification. What categories of harm could the system cause? Health and safety impacts, fundamental rights impacts (discrimination, privacy, freedom of expression), economic impacts, environmental impacts.
- Stakeholder consultation. Engage domain experts, potential users, affected communities, and legal counsel to surface risks that internal teams may not anticipate.
Phase 2 — Risk analysis and evaluation (development)
During development, identified risks are analyzed in depth:
- Likelihood estimation. How probable is it that each risk will materialize? Base this on technical analysis, empirical testing, historical data from comparable systems, and expert judgment.
- Severity estimation. If the risk materializes, how severe would the impact be? Consider both the most likely scenario and the worst-case scenario.
- Risk scoring. Combine likelihood and severity into a risk level (see the risk matrix in the next section).
- Foreseeable misuse analysis. Go beyond intended use to model how users might misuse the system. What are the incentives for misuse? What are the consequences?
Phase 3 — Risk treatment and mitigation (pre-deployment)
For each risk that exceeds the defined risk acceptance threshold, design and implement mitigation measures:
- Elimination. Can the risk be eliminated entirely by changing the system design? (e.g., removing a feature that creates an unjustifiable risk)
- Reduction. Can the risk be reduced to an acceptable level? (e.g., adding input validation, constraining outputs, implementing confidence thresholds)
- Transfer. Can the risk be transferred to an entity better equipped to manage it? (e.g., requiring human review for decisions below a certain confidence threshold — linking directly to human oversight requirements under Article 14)
- Monitoring. For risks that cannot be eliminated or sufficiently reduced, can continuous monitoring detect when they begin to materialize?
Critically, Article 9(3) requires you to consider interaction effects between mitigation measures. Adding a confidence threshold may reduce false positives but increase false negatives. Adding human review may reduce automated errors but introduce human bias. Document these trade-offs.
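To make the reduction and transfer options above concrete, here is a minimal sketch (in Python, with a hypothetical threshold value and hypothetical names) of routing low-confidence outputs to human review, the pattern that connects risk treatment to Article 14 human oversight:

```python
# Hypothetical sketch: routing low-confidence predictions to human review.
# The threshold and names are illustrative, not prescribed by Article 9.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # defined in the risk management plan before testing


@dataclass
class Decision:
    prediction: str     # the model's proposed outcome
    confidence: float   # model confidence score in [0, 1]
    route: str          # "automated" or "human_review"


def route_decision(prediction: str, confidence: float) -> Decision:
    """Send low-confidence outputs to a human reviewer instead of acting automatically."""
    if confidence < REVIEW_THRESHOLD:
        return Decision(prediction, confidence, route="human_review")
    return Decision(prediction, confidence, route="automated")


print(route_decision("reject_application", 0.62).route)  # human_review
```

The threshold itself belongs in the risk management plan, defined before testing and revisited when post-market data suggests it is mis-calibrated.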
Phase 4 — Residual risk assessment (deployment)
Before deployment, assess the residual risk — the risk that remains after all mitigation measures are in place:
- Individual residual risk. For each identified hazard, what is the remaining risk level after mitigation?
- Overall residual risk. Considering all hazards together, including interaction effects, what is the total residual risk profile of the system?
- Acceptability judgment. Is the residual risk acceptable, given the benefits of the system, the state of the art, and societal expectations?
- Communication. All residual risks must be documented and communicated to deployers via the instructions for use, as required by Article 13.
Phase 5 — Post-market monitoring and feedback (operation)
Once the system is in operation, the risk management cycle continues:
- Performance monitoring. Track key performance indicators against the metrics and thresholds defined in the risk management plan.
- Incident tracking. Record and analyze incidents, near-misses, user complaints, and anomalies.
- Data analysis. Use post-market data to validate or revise risk estimates. Are risks materializing at the expected frequency and severity?
- Feedback loop. Feed findings back into Phase 1, triggering a new risk identification cycle. This is the mechanism by which Article 9(2)(c) is satisfied.
For detailed guidance on setting up post-market monitoring systems, see our post-market monitoring and incident reporting guide.
Risk identification and analysis in practice
Known risks vs foreseeable risks
Known risks are documented in academic literature, observed during testing, reported from prior versions, or identified through regulatory guidance and harmonized standards. Reasonably foreseeable risks have not yet been observed in your specific system but can be anticipated based on the system's architecture, data, and deployment context — risks that a competent professional would expect given current knowledge.
The standard is "reasonably foreseeable," not "every conceivable risk." You are not expected to predict black swan events, but you must apply the level of foresight a diligent provider would exercise.
Intended use vs reasonably foreseeable misuse
While intended use is straightforward (the documented purpose the system is designed for), reasonably foreseeable misuse requires more creative analysis. Common misuse patterns include:
- Over-reliance — treating AI recommendations as definitive decisions.
- Scope creep — using the system beyond its intended scope (e.g., applying a screening tool to a different population).
- Adversarial manipulation — crafting inputs to exploit system weaknesses.
- Circumvention of safeguards — disabling safety features for throughput or convenience.
Document both intended use and foreseeable misuse scenarios, with corresponding risks and mitigation measures for each.
Techniques for risk identification
No single technique is sufficient. Best practice combines multiple approaches:
Risk matrix — likelihood vs severity
A risk matrix provides a structured framework for scoring identified risks. While the AI Act does not mandate a specific matrix format, a 5×5 likelihood-by-severity matrix is widely used.
Risks rated "High" or "Critical" must have documented mitigation measures. "Medium" risks should have mitigation measures or a justified rationale for acceptance. When assessing severity, consider all impact dimensions: health and safety (physical/psychological harm), fundamental rights (discrimination, privacy violation, due process denial), economic (financial loss, service denial), and societal (erosion of trust, systemic discrimination).
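The Act does not prescribe a scoring formula either, so the following is only an illustrative sketch of how such a matrix might be encoded; the scales and cut-off scores are assumptions that belong in your own risk criteria (see Step 2 below):

```python
# Illustrative 5x5 risk matrix. The AI Act does not mandate this format;
# the scales and cut-offs below are assumptions set in your own risk criteria.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}


def risk_level(likelihood: str, severity: str) -> str:
    """Combine ordinal likelihood and severity scores into a risk level."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]  # 1..25
    if score >= 15:
        return "Critical"
    if score >= 10:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"


# A "possible" (3) event with "major" (4) impact scores 12, i.e. "High",
# so a documented mitigation measure is required.
print(risk_level("possible", "major"))  # High
```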
Residual risk — the most misunderstood requirement
What "acceptable" residual risk means
Article 9(4) requires that residual risk be "judged to be acceptable." This phrase causes more confusion than any other in Article 9, because the regulation does not define "acceptable" with a bright-line threshold. That is intentional — acceptability depends on context.
The judgment of acceptability involves weighing:
- The residual risk itself. What is the remaining likelihood and severity of harm after mitigation measures are applied?
- The benefits of the AI system. What positive outcomes does the system deliver to users, affected persons, and society? The greater the benefit, the more residual risk may be tolerable.
- The state of the art. What risk levels are achievable with current technology? If your system has residual risk that could be reduced with available techniques that you have not implemented, that risk is harder to justify.
- Societal expectations. What level of risk do people and regulators expect and tolerate for this type of application? Higher expectations exist for systems affecting vulnerable populations or fundamental rights.
- Available alternatives. Are there alternative approaches — including non-AI approaches — that achieve the same benefits with lower risk?
Balancing benefits vs risks
The balancing exercise is not purely mathematical — it requires documented judgment supported by evidence. For each residual risk: (1) document the post-mitigation risk level, (2) explain why it cannot be further reduced, (3) document the benefits that justify acceptance, (4) compare to the state of the art and harmonized standards, and (5) record the final acceptability judgment with rationale and decision-maker identity.
Documentation requirements
The residual risk assessment must be part of Annex IV technical documentation, including: a risk register with pre- and post-mitigation levels, descriptions of each mitigation measure and its effectiveness, residual risk statements per hazard, an overall system assessment, the acceptability judgment with rationale, and the qualifications of decision-makers.
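As a rough illustration, a single entry in that documentation might be recorded as follows; the field names and values are hypothetical, not a prescribed Annex IV format:

```python
# Hypothetical residual risk acceptance record; not a prescribed Annex IV schema.
residual_risk_record = {
    "hazard_id": "HAZ-012",
    "post_mitigation_level": "Medium",
    "why_not_further_reducible": (
        "No available technique lowers false negatives further without an "
        "unacceptable rise in false positives."
    ),
    "benefits_justifying_acceptance": "Detects defects earlier than the manual baseline.",
    "state_of_the_art_comparison": "In line with published results for comparable systems.",
    "acceptability_decision": "accepted",
    "decided_by": "Head of Quality, 2026-03-01",
}
```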
Communication to deployers
Residual risks must be communicated to deployers through the instructions for use (Article 13), covering: the nature and level of residual risks, conditions under which risks may be elevated, recommended deployer-level measures, and use limitations. Deployers must take this information into account for their fundamental rights impact assessment and human oversight measures.
Testing requirements under Article 9
Prior to placing on market — and throughout the lifecycle
Testing under Article 9 is not a gate that you pass once. Article 9(7) specifies that testing shall be performed "at any point in time throughout the development process, and, in any event, prior to their being placed on the market or put into service." This means:
- During development: Incremental testing as the system is built, trained, and refined.
- Prior to release: Comprehensive validation testing against the full set of predefined metrics and thresholds.
- After deployment: Ongoing testing triggered by system updates, changes in the operating environment, or findings from post-market monitoring.
Against pre-defined metrics and probabilistic thresholds
Article 9(6) requires testing "against prior defined metrics and probabilistic thresholds." You cannot run tests and select flattering metrics after the fact — metrics and pass/fail thresholds must be defined in the risk management plan before testing begins. "Accuracy greater than 90%" is a threshold; "good accuracy" is not.
Common metrics include accuracy, precision and recall, false positive and false negative rates, fairness metrics such as demographic parity and equalized odds, and robustness measures such as performance under input perturbation.
For a deep dive into fairness metrics and bias testing, see our AI bias testing and fairness guide.
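A minimal sketch of what "metrics defined before testing" can look like in practice; the metric names and numeric thresholds below are illustrative assumptions, not regulatory minima:

```python
# Sketch: declare pass/fail thresholds before testing, then evaluate results
# against them. Metric names and values are assumptions, not regulatory minima.
THRESHOLDS = {
    "accuracy": ("min", 0.90),
    "false_negative_rate": ("max", 0.05),
    "demographic_parity_difference": ("max", 0.10),
}


def evaluate(results: dict) -> dict:
    """Return a pass/fail verdict per metric against the pre-defined thresholds."""
    verdicts = {}
    for metric, (direction, limit) in THRESHOLDS.items():
        value = results[metric]
        verdicts[metric] = value >= limit if direction == "min" else value <= limit
    return verdicts


print(evaluate({"accuracy": 0.93, "false_negative_rate": 0.07,
                "demographic_parity_difference": 0.04}))
# {'accuracy': True, 'false_negative_rate': False, 'demographic_parity_difference': True}
```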
Real-world conditions testing
Laboratory testing on curated datasets is necessary but not sufficient. Real-world conditions testing should include diverse input data reflecting actual populations, edge cases identified during risk identification, operational stresses (high volume, noisy data), and user behavior variability across expert and novice users.
Specific testing for children and vulnerable groups
When a system is likely to affect children or vulnerable groups, Article 9(8) requires dedicated test datasets representing those populations, specific performance metrics measured separately (not only in aggregate), scenario-based testing simulating how vulnerable users interact with the system, and impact assessment recognizing that the same error may have more severe consequences for a child than for an adult.
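A simple sketch of disaggregated evaluation, computing a metric separately per subgroup rather than only in aggregate; the group labels and the metric (accuracy) are illustrative:

```python
# Sketch: measure performance separately per subgroup rather than only in aggregate.
# Group labels and the metric (accuracy) are illustrative assumptions.
from collections import defaultdict


def per_group_accuracy(records: list) -> dict:
    """records: [{"group": ..., "correct": bool}, ...] -> accuracy per group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["correct"])
    return {group: hits[group] / totals[group] for group in totals}


results = per_group_accuracy([
    {"group": "adults", "correct": True},
    {"group": "adults", "correct": True},
    {"group": "minors", "correct": True},
    {"group": "minors", "correct": False},
])
print(results)  # {'adults': 1.0, 'minors': 0.5}; aggregate accuracy (0.75) would hide the gap
```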
Mapping Article 9 to ISO 42001 and NIST AI RMF
Organizations pursuing ISO/IEC 42001 or the NIST AI Risk Management Framework will find substantial overlap with Article 9. Building on existing frameworks reduces effort and avoids gaps. See our framework crosswalk for a comprehensive mapping.
ISO 42001 Clause 6.1 mapping
ISO/IEC 42001 (the AI Management System standard) addresses risk through Clause 6.1 — Actions to address risks and opportunities. Its subclauses on AI risk assessment, AI risk treatment, and the AI system impact assessment broadly parallel Article 9(2)'s identification, analysis, and treatment steps.
Key gap: ISO 42001 does not explicitly require testing against pre-defined metrics and probabilistic thresholds as specifically as Article 9(6) does. Organizations relying solely on ISO 42001 must supplement their testing framework to meet Article 9's prescriptive requirements.
For a complete guide to implementing ISO 42001, see our ISO 42001 certification guide.
NIST AI RMF GOVERN and MAP functions
The NIST AI Risk Management Framework organizes risk management into four functions: GOVERN, MAP, MEASURE, and MANAGE. GOVERN establishes the policies, roles, and processes behind Article 9(1)'s obligation to establish and maintain the risk management system, while MAP covers context analysis and the identification of risks from intended use and foreseeable misuse under Article 9(2)(a) and (b). MEASURE and MANAGE support the testing, treatment, and monitoring requirements.
Key gap: The NIST AI RMF is a voluntary framework and does not mandate specific documentation requirements. Organizations using NIST AI RMF as a baseline must ensure that their documentation meets the prescriptive requirements of Article 9 and Annex IV.
Combined approach efficiency
For organizations operating in both EU and US markets: use ISO 42001 as the management system backbone (Plan-Do-Check-Act governance), NIST AI RMF for risk identification methodology (MAP function subcategories), and layer Article 9 specifics on top (pre-defined metrics with probabilistic thresholds, explicit residual risk acceptability judgments, vulnerable population testing, and Annex IV documentation). This avoids duplication while ensuring compliance with all three frameworks. For a detailed mapping, see our framework crosswalk guide.
Building your risk management system — step by step
The following seven steps provide a practical roadmap for Article 9 compliance. Read in conjunction with our AI governance framework guide.
Step 1 — Define scope and context
Before analyzing risks, define the boundaries of the risk management system:
- AI system description. What does the system do? What are its inputs, outputs, and intended purpose?
- Intended users. Who will operate the system? What is their training level and domain expertise?
- Affected persons. Who is impacted by the system's outputs? Are any affected persons vulnerable?
- Deployment context. In what environment, geography, and regulatory context will the system operate?
- System boundaries. Where does the AI system end and its operating environment begin? What upstream and downstream systems interact with it?
Document this scope in a Risk Management Plan — the overarching document that governs all subsequent risk management activities.
Step 2 — Establish risk criteria and thresholds
Define the criteria that will be used to evaluate and classify risks:
- Risk classification scheme. Define risk levels (e.g., Low, Medium, High, Critical) and the criteria for each level.
- Likelihood scale. Define what "rare," "unlikely," "possible," "likely," and "almost certain" mean in the context of your system, with quantitative ranges where feasible.
- Severity scale. Define severity levels for each category of impact (health, safety, fundamental rights, economic).
- Acceptance thresholds. Define the risk level at or above which mitigation is mandatory, and the level at which a system must not be deployed.
- Metric thresholds. Define the performance, fairness, and robustness metrics and their pass/fail thresholds, as required by Article 9(6).
Step 3 — Conduct risk identification workshop
Convene a cross-functional team of data scientists, engineers, domain experts, legal counsel, ethicists, and user representatives to systematically identify risks using techniques such as failure mode and effects analysis (FMEA), threat modeling, scenario testing, and literature review. The output is a risk register documenting each identified risk with its description, source, category, and initial assessment. Structured workshops should cover each system component's failure modes, intended use and misuse scenarios, and population-specific risks for vulnerable groups.
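A lightweight sketch of what one risk register entry might look like; the fields mirror the outputs described above, and the names are illustrative rather than mandated:

```python
# Illustrative risk register entry; field names are assumptions, not a mandated schema.
from dataclasses import dataclass


@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str    # what can go wrong, and for whom
    source: str         # e.g. "FMEA workshop", "literature", "prior version"
    category: str       # health/safety, fundamental rights, economic, societal
    likelihood: str     # initial estimate, refined in Step 4
    severity: str
    initial_level: str  # from the risk matrix


entry = RiskRegisterEntry(
    risk_id="R-007",
    description="Screening model ranks candidates from under-represented groups lower.",
    source="FMEA workshop",
    category="fundamental rights",
    likelihood="possible",
    severity="major",
    initial_level="High",
)
```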
Step 4 — Analyze and evaluate risks
For each risk in the register:
- Estimate likelihood using available evidence (technical analysis, testing data, literature, expert judgment).
- Estimate severity for each impact category.
- Apply the risk matrix to determine the risk level.
- Prioritize risks for treatment based on the risk level and the risk acceptance thresholds defined in Step 2.
Document the analysis methodology, data sources, assumptions, and uncertainties for each risk. This documentation is essential for Annex IV technical documentation and for defending your risk analysis to regulators.
Step 5 — Design mitigation measures
For each risk that exceeds the acceptance threshold:
- Identify candidate measures. Brainstorm multiple potential mitigation measures.
- Evaluate effectiveness. For each candidate measure, estimate how much it will reduce the risk.
- Evaluate feasibility. Consider technical feasibility, cost, impact on system performance, and interaction with other measures.
- Select measures. Choose the combination of measures that most effectively reduces risk to an acceptable level, considering interaction effects.
- Define implementation requirements. For each measure, specify what must be built, configured, documented, or communicated.
Step 6 — Implement and document
Build measures into the system (input validation, output constraints, confidence thresholds, bias mitigation, human review triggers) and establish supporting operational processes (oversight procedures, incident reporting, review cycles). Update the risk management plan, risk register, technical documentation, and instructions for use. Documentation must be sufficient for a notified body or market surveillance authority to evaluate adequacy.
Step 7 — Monitor and iterate
After deployment, implement telemetry, logging, user feedback, and incident reporting. Regularly analyze data against pre-defined metrics and thresholds. Define triggers for ad hoc reviews (severity threshold breaches, metric failures, system updates, regulatory changes). Update the risk register with newly identified risks and revised estimates, feeding findings back into Step 3 to start a new iteration.
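To illustrate, a minimal sketch of how ad hoc review triggers might be encoded; the conditions, field names, and threshold are assumptions to be replaced by those defined in your own monitoring plan:

```python
# Sketch: ad hoc review triggers fed by post-market monitoring data.
# Conditions, field names, and the threshold are illustrative assumptions.
def review_triggers(observations: dict) -> list:
    """Return the reasons, if any, for starting an unscheduled risk review."""
    triggers = []
    if observations.get("serious_incident", False):
        triggers.append("serious incident reported")
    if observations.get("accuracy", 1.0) < 0.90:  # pre-defined performance threshold
        triggers.append("performance metric below threshold")
    if observations.get("system_updated", False):
        triggers.append("substantial modification of the system")
    if observations.get("regulatory_change", False):
        triggers.append("change in applicable regulation or standards")
    return triggers


print(review_triggers({"accuracy": 0.87, "system_updated": True}))
# ['performance metric below threshold', 'substantial modification of the system']
```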
Practical checklist
Use this checklist to verify completeness of your risk management system:
- Risk management plan documented with scope, governance, methodology, and review cycle
- Risk criteria and acceptance thresholds defined before risk analysis begins
- Known risks identified from literature, testing, and prior versions
- Foreseeable risks identified through structured analysis
- Intended use scenarios analyzed for risks
- Foreseeable misuse scenarios analyzed for risks
- Risk matrix applied with documented likelihood and severity estimates
- Mitigation measures designed, evaluated, and implemented for all risks above acceptance threshold
- Interaction effects between mitigation measures evaluated
- Residual risk assessed for each hazard and overall system
- Residual risk acceptability judged and documented with rationale
- Testing conducted against pre-defined metrics and probabilistic thresholds
- Testing conducted under real-world or simulated real-world conditions
- Vulnerable population impacts specifically tested where applicable
- Residual risks communicated to deployers in instructions for use
- Post-market monitoring system established and feeding into risk management
- Review and update cycle defined and operational
- All activities documented to Annex IV requirements
Frequently asked questions
How does Article 9 differ from a traditional risk assessment?
A traditional risk assessment is a point-in-time exercise. Article 9 requires a continuous, iterative process spanning the entire AI lifecycle — maintained and updated as the system evolves, new risks emerge from post-market data, and the operating environment changes.
Does Article 9 apply to general-purpose AI (GPAI) models?
Article 9 applies to high-risk AI systems (Articles 6–7), not GPAI models directly (governed by Chapter V). However, if a GPAI model is integrated into a high-risk system, the provider of that system must cover GPAI-introduced risks under Article 9. See our GPAI obligations guide.
What happens if residual risk is deemed unacceptable?
The system must not be placed on the market. You may need to redesign the system, limit its scope, add safeguards, or abandon the product. Notified bodies will evaluate whether the residual risk judgment is defensible.
How often must the risk management system be updated?
Article 9(1) requires "regular systematic updating" without specifying a frequency. Best practice: define a review cycle (annually or semi-annually) with ad hoc reviews triggered by system updates, incidents, regulatory changes, environmental shifts, or post-market monitoring findings.
Can I use ISO 14971 (medical devices) as the basis?
Yes, with supplements. ISO 14971's lifecycle approach and benefit-risk analysis closely mirror Article 9, but it does not address AI-specific risks like bias, distribution shift, or fundamental rights impacts beyond health and safety. See our healthcare AI compliance guide.
What is the penalty for non-compliance with Article 9?
Fines of up to EUR 15 million or 3% of global annual turnover, plus potential withdrawal or recall of non-compliant systems from the EU market. See our penalties guide.
Next steps
Building an Article 9-compliant risk management system is a substantial undertaking, but you do not have to start from scratch: risk registers, testing frameworks, and documentation processes can often be adapted from existing quality and risk management systems.
Start here:
- Determine if your system is high-risk. Use our AI Act Assessment tool to classify your AI systems and identify applicable obligations.
- Map your current state. Compare your existing risk management practices against the checklist above and identify gaps.
- Build your governance framework. Establish the organizational structures, roles, and processes described in our AI governance framework guide.
- Align with standards. Leverage ISO 42001 and NIST AI RMF to accelerate implementation. Our framework crosswalk shows you exactly how they map.
- Document continuously. Use our Annex IV technical documentation template to ensure your risk management documentation meets regulatory requirements.
The 2 August 2026 deadline is approaching. The time to build your risk management system is now.
Check your AI system's compliance
Free assessment with no signup required. Get your risk classification in minutes.
Start your free assessment