
AI Act for Financial Services Compliance

EU AI Act compliance guide for banking, insurance, and fintech. Credit scoring, insurance pricing, fraud detection, and high-risk AI obligations.

Legalithm Team · 27 min read

AI Act for Financial Services: Banking, Insurance, and Fintech Compliance Guide

Financial services is one of the sectors most heavily affected by the EU AI Act — and one of the least prepared. Banks, insurers, investment firms, payment processors, and fintechs use AI systems that sit squarely within the regulation's high-risk classification: credit scoring, creditworthiness assessment, insurance pricing, claims processing, anti-money laundering, and customer risk profiling. These are not edge cases. Credit scoring and life and health insurance pricing are listed explicitly in Annex III, point 5 of the AI Act among the AI systems that determine access to essential private services and benefits.

The compliance burden is significant. By August 2, 2026, every high-risk AI system placed on the EU market must satisfy a comprehensive set of requirements: risk management systems, data governance programmes, technical documentation, human oversight mechanisms, accuracy and robustness standards, and — for deployers that are financial institutions — a fundamental rights impact assessment. Financial institutions also face a unique challenge that no other sector shares to the same degree: the AI Act does not operate in isolation. It layers on top of a dense existing regulatory environment — DORA, MiFID II, Solvency II, PSD2, the Consumer Credit Directive, GDPR — creating overlapping and sometimes conflicting compliance demands.

This guide breaks down every AI Act obligation relevant to financial services, explains the interaction with existing financial regulation, and provides an actionable compliance roadmap. If you have not yet classified your AI systems, start with the high-risk classification guide and then return here.

TL;DR — AI Act financial services essentials

  • Credit scoring and creditworthiness AI are explicitly high-risk under Annex III, point 5(b) — the single most impactful classification for retail banking.
  • Insurance pricing AI for life and health insurance is explicitly high-risk under Annex III, point 5(c), which covers risk assessment and pricing for those products. Property and casualty lines face a less certain but still plausible classification.
  • Fraud detection is generally not high-risk when used solely to detect financial fraud, but becomes high-risk when its output determines whether a customer is denied a financial service.
  • The August 2, 2026 deadline applies to all new high-risk AI systems. Systems already in operation face a transitional provision — but only if they undergo no significant modification.
  • Financial institutions are both deployers and sometimes providers of AI systems. Each role carries distinct obligations. See the provider vs deployer guide.
  • The AI Act interacts with DORA, MiFID II, Solvency II, and GDPR — creating overlapping but non-identical requirements for documentation, testing, governance, and human oversight.
  • Penalties reach up to EUR 15 million or 3% of global annual turnover for high-risk compliance failures, and EUR 35 million or 7% for prohibited practices. See the full penalties guide.
  • Conformity assessment for financial AI is self-assessment in most cases — no notified body required — but must be rigorous, documented, and auditable. See the conformity assessment guide.
  • Use the Legalithm AI Act Assessment to classify your financial AI systems and identify applicable obligations.

Why financial services face heavy AI Act obligations

Financial AI systems affect people's livelihoods at a fundamental level. A credit scoring algorithm that rejects a mortgage application determines whether someone can buy a home. An insurance pricing model that loads premiums based on health predictions determines whether someone can afford coverage. An anti-money laundering system that flags an account for investigation can lead to asset freezing and account closure. The European legislators identified these impacts as carrying a high risk to fundamental rights — specifically the right to non-discrimination, the right to an effective remedy, the right to property, and the right to social security.

Financial AI as Annex III high-risk

The AI Act's high-risk classification in Article 6(2) applies to AI systems falling within use cases listed in Annex III. For financial services, the critical provision is Annex III, point 5(b):

AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.

Additionally, Annex III, point 5(c) covers AI systems used for risk assessment and pricing in relation to natural persons in the case of life and health insurance, which captures underwriting and pricing models for those products. The position of wealth management and investment advisory tools is less settled; they are addressed under MiFID II later in this guide.

The classification does not depend on the sophistication of the model. A logistic regression model that produces a credit score is subject to the same requirements as a deep neural network. The trigger is the function — evaluating creditworthiness or establishing a credit score — not the underlying technology.

The August 2026 deadline

The AI Act's phased enforcement timeline means different obligations take effect at different dates. For high-risk AI systems in financial services:

Date | Milestone
1 August 2024 | AI Act enters into force. Begin gap analysis and AI inventory.
2 February 2025 | Prohibited AI practices enforceable. Certain manipulative AI uses in financial marketing become unlawful.
2 August 2025 | GPAI model obligations apply. Foundation models underlying financial AI tools must comply with transparency requirements.
2 August 2026 | High-risk obligations fully enforceable. All new high-risk financial AI systems must comply with Chapter III requirements.
2 August 2027 | Remaining obligations apply, including for high-risk AI embedded in products regulated under Annex I legislation. High-risk systems already on the market before 2 August 2026 must comply only once they undergo a significant modification to their purpose or design.

Financial institutions that have not started compliance planning are already behind schedule. The conformity assessment, technical documentation, risk management system, and data governance programme required under the AI Act cannot be implemented in weeks — they require months of systematic work. See the full EU AI Act timeline for additional milestones.

High-risk AI systems in finance: detailed breakdown

Not every AI system used by a bank or insurer is high-risk. The classification is function-specific. Understanding the boundary is essential for accurate scoping and avoiding both under-compliance and over-compliance.

Credit scoring and creditworthiness assessment

Classification: High-risk (Annex III, point 5(b))

This is the clearest and most impactful high-risk classification in financial services. Any AI system whose purpose is to evaluate the creditworthiness of natural persons or to establish their credit score is high-risk. The scope is broad:

  • Traditional credit scoring models used by credit bureaux (e.g. models producing scores for consumer lending decisions).
  • Internal bank models that assess whether to approve a loan, mortgage, overdraft, or credit card application.
  • Alternative credit scoring systems used by fintechs — models incorporating non-traditional data such as utility payments, rental history, mobile phone usage, social media signals, or spending behaviour.
  • Affordability assessment models used to determine whether a consumer can service a debt obligation.
  • Pre-qualification engines that determine which consumers see loan offers — because the AI system effectively evaluates creditworthiness to decide whether to present a product.
  • Dynamic credit limit adjustment algorithms that modify existing credit lines based on ongoing creditworthiness evaluation.

Real-world example: A digital lender uses a machine learning model incorporating 500+ features — including device type, browsing behaviour, time of application, and transaction history — to produce a credit risk score. That model is high-risk regardless of whether the lender calls it a "credit score" or a "risk assessment" or a "decision engine." The function — evaluating creditworthiness — determines the classification.

What this means in practice: Every retail bank, consumer finance company, credit card issuer, mortgage lender, and fintech lender in the EU market must ensure their credit scoring AI satisfies the full Chapter III requirements by August 2026. For institutions using vendor-provided scoring models, both the vendor (as provider) and the institution (as deployer) carry compliance obligations. See the provider vs deployer guide for how these responsibilities divide.

Insurance pricing and risk assessment (life and health)

Classification: High-risk (Annex III, point 5(c))

AI systems used to determine insurance premiums, coverage terms, or eligibility for life and health insurance products fall within the high-risk classification. The logic is that these systems determine access to an essential private service — health and life protection — and their outputs can have a direct, material impact on individuals' financial wellbeing and access to healthcare.

Specifically captured:

  • Underwriting models for life and health insurance that use AI to assess risk and set premiums.
  • Risk classification systems that categorise applicants into risk tiers affecting premium levels.
  • Claims prediction models used during underwriting to estimate future claim likelihood and inform pricing.
  • Health risk scoring systems that evaluate applicants' health profiles using medical data, wearable data, or lifestyle data.
  • Dynamic pricing algorithms that adjust health or life insurance premiums based on ongoing behavioural or health data.

Property and casualty insurance — the grey area: The classification of P&C insurance AI is less settled. Motor, home, and commercial insurance pricing models are not explicitly mentioned in Annex III, point 5 in the same direct way that creditworthiness and health/life insurance are. However, where an AI system determines whether a consumer can access motor insurance — a legal requirement in most EU Member States — a strong argument exists that it evaluates access to an essential service. The European AI Office's guidance and Member State enforcement practices will clarify this boundary. Prudent insurers should conduct an AI Act risk assessment on all underwriting AI systems, not only life and health.

Real-world example: A life insurer deploys an AI model that ingests applicant health questionnaires, medical records, and wearable device data to predict mortality risk and set premiums. This model is high-risk under the AI Act. It must comply with data governance requirements — including examining training data for biases related to gender, ethnicity, age, disability, and socioeconomic status — and must provide for meaningful human oversight of pricing decisions.

Fraud detection — the exception that matters

Classification: Generally NOT high-risk

Annex III, point 5(b) contains an explicit carve-out:

"...with the exception of AI systems used for the purpose of detecting financial fraud."

This means that AI systems whose sole purpose is fraud detection — transaction monitoring, suspicious activity identification, anomaly detection in payment flows — are not high-risk under the AI Act's Annex III classification. The rationale is that fraud detection systems protect consumers rather than gatekeep their access to services.

But the exception is narrower than it appears. Critical nuances:

  1. The exception applies only to fraud detection, not to the downstream consequences. If a fraud detection system's output triggers an automatic account suspension, a transaction block, or a service denial, the AI system that makes that downstream decision may itself be high-risk — because it now determines access to an essential financial service.

  2. Anti-money laundering (AML) and counter-terrorism financing (CTF) systems occupy an ambiguous position. These systems are not solely fraud detection — they assess customer risk profiles for regulatory compliance purposes. Where AML/CTF AI results in account closures, de-risking, or refusal of service, the system's output affects access to essential banking services. Financial institutions should assess these systems individually rather than assuming the fraud detection exemption applies universally.

  3. Combined systems that perform both fraud detection and credit risk assessment — a common architecture in fintechs — cannot rely on the fraud detection exemption for the credit risk component. If the system has a dual purpose, the high-risk classification applies to the creditworthiness function.

Real-world example: A payment processor uses an AI system to flag potentially fraudulent transactions in real-time. The system's sole output is a fraud probability score sent to a human analyst for review. This system benefits from the fraud detection exemption and is not high-risk. However, if the same processor deploys a system that automatically blocks customer accounts based on algorithmic risk scoring — and the blocked customers cannot access their funds — the account-blocking function falls outside the exemption.

Recruitment AI in financial institutions

Classification: High-risk (Annex III, point 4)

Financial institutions using AI in hiring — CV screening, candidate ranking, interview scoring, performance monitoring, promotion recommendations — face high-risk obligations under the separate employment and recruitment classification. This applies regardless of whether the institution is a bank, insurer, or fintech. For a detailed analysis of recruitment AI obligations, see the AI Act HR and recruitment compliance guide.

Compliance requirements for financial AI systems

Every high-risk AI system must satisfy the requirements set out in Chapter III, Section 2 of the AI Act. For financial institutions, these requirements must be understood in the context of existing regulatory frameworks that already impose related — but not identical — obligations.

Risk management system (Article 9)

Article 9 requires a continuous, iterative risk management system throughout the AI system's lifecycle. For financial AI, this means:

  • Identification and analysis of known and reasonably foreseeable risks to health, safety, and fundamental rights. For credit scoring, this includes the risk of discriminatory outcomes based on protected characteristics — race, gender, age, disability, ethnicity. For insurance pricing, this includes risks of indirect discrimination through proxy variables.
  • Estimation and evaluation of risks arising from intended use and reasonably foreseeable misuse. A creditworthiness model designed for personal loan decisions that is repurposed for mortgage underwriting without re-validation is a foreseeable misuse scenario.
  • Adoption of risk management measures — residual risk must be judged acceptable, considering the state of the art. For credit scoring, this may require demonstrating that the model does not produce statistically significant disparate outcomes across protected groups, or that any disparate outcomes are justified and proportionate.
  • Testing against predefined metrics and probabilistic thresholds appropriate to the intended purpose. Credit scoring models must be tested for predictive accuracy, stability, and fairness across demographic subgroups.

Financial institutions already operating under EBA guidelines on IRB models, ECB TRIM requirements, or Solvency II model validation standards will find overlap — but the AI Act's risk management requirements are broader in scope, particularly around fundamental rights and bias. The AI Act requires explicit assessment of risks to non-discrimination, privacy, dignity, and effective remedy — dimensions that existing model risk management frameworks in banking may not systematically address.
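To make the testing requirement concrete, the sketch below shows one way to check a credit model's outcomes against predefined subgroup thresholds. It is a minimal illustration only: the column names ("approved", "defaulted", "age_band") and the 0.80 ratio threshold are hypothetical assumptions, not values prescribed by the AI Act.

```python
# Minimal sketch: subgroup outcome testing for a credit model, in the spirit of
# Article 9's "testing against predefined metrics". Column names and the 0.80
# threshold are hypothetical placeholders.
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str,
                    outcome_col: str = "approved", label_col: str = "defaulted",
                    min_ratio: float = 0.80) -> pd.DataFrame:
    """Approval and realised default rates per subgroup, with a disparate-impact flag."""
    stats = df.groupby(group_col).agg(
        approval_rate=(outcome_col, "mean"),
        default_rate=(label_col, "mean"),
        n=(outcome_col, "size"),
    )
    # Each group's approval rate relative to the best-treated group
    stats["approval_ratio"] = stats["approval_rate"] / stats["approval_rate"].max()
    stats["flagged"] = stats["approval_ratio"] < min_ratio
    return stats

# report = subgroup_report(decisions, group_col="age_band")
# Flagged groups feed the Article 9 risk log for investigation and, where the
# disparity is unjustified, mitigation.
```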

Data governance (Article 10)

Article 10 imposes requirements on training, validation, and testing datasets. For financial AI, this is the requirement with the greatest practical impact:

  • Training data must be relevant, sufficiently representative, and as free of errors as possible. For credit scoring, this means the training dataset must represent the population to which the model will be applied — not just the historically approved population (a well-known source of selection bias in lending).
  • Examination for possible biases is mandatory. Financial institutions must examine whether their training data reflects historical discrimination — such as historic lending patterns that systematically disadvantaged certain ethnic groups, women, or low-income communities — and take steps to address identified biases.
  • Data governance practices must include documentation of data sources, collection methods, data preparation operations, assumptions, and limitations.
  • Personal data processing must comply with GDPR, including establishing a lawful basis for processing sensitive data used in bias detection. See the AI Act vs GDPR guide for how these obligations interact.

The challenge for financial institutions is acute. Credit scoring inherently depends on historical data, and historical lending data reflects decades of discriminatory practices. Compliance with Article 10 requires more than documenting datasets — it requires actively identifying and mitigating embedded biases. See the bias testing and fairness guide for practical approaches.
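As a rough illustration of the representativeness point, the sketch below compares category shares in a training set against the population the model is actually meant to score. The inputs and the 5-percentage-point tolerance are assumptions for illustration, not a standard.

```python
# Minimal sketch: comparing training-data composition with the population the
# model will score (Article 10 representativeness). Inputs and tolerance are
# illustrative assumptions.
import pandas as pd

def representativeness_gap(train: pd.Series, population: pd.Series,
                           tolerance: float = 0.05) -> pd.DataFrame:
    """Per-category share in training data vs intended population, with a gap flag."""
    shares = pd.concat(
        [train.value_counts(normalize=True), population.value_counts(normalize=True)],
        axis=1, keys=["train_share", "population_share"],
    ).fillna(0.0)
    shares["gap"] = (shares["train_share"] - shares["population_share"]).abs()
    shares["flagged"] = shares["gap"] > tolerance
    return shares.sort_values("gap", ascending=False)

# e.g. representativeness_gap(training_df["region"], applicant_pool_df["region"])
# Large gaps typically trace back to the historically approved population and
# belong in the Article 10 documentation of known limitations.
```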

Technical documentation (Article 11)

Article 11 and Annex IV require comprehensive technical documentation that covers:

  • A general description of the AI system, its intended purpose, and the provider.
  • A detailed description of the elements of the AI system and its development process — including design specifications, system architecture, algorithms used, data requirements, and training methodology.
  • Information on the monitoring, functioning, and control of the AI system, including human oversight measures.
  • A description of the system's accuracy, robustness, and cybersecurity, including metrics and test results.
  • A description of the risk management system.
  • A description of changes made throughout the system's lifecycle.

For financial institutions accustomed to model documentation under supervisory expectations (e.g. ECB's Guide to Internal Models, PRA's model risk management expectations), much of this will feel familiar. However, Annex IV documentation requirements are more granular in specific areas — particularly around data governance, bias assessment, and fundamental rights impact — than typical model documentation.
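One way to keep this documentation consistent across systems is to maintain it in a structured, machine-readable form. The skeleton below is a hypothetical example whose field names merely paraphrase the Annex IV headings above; it is not an official schema.

```python
# Hypothetical skeleton for Annex IV technical documentation. Field names
# paraphrase the headings listed above; they are not an official schema.
from dataclasses import dataclass, field

@dataclass
class AnnexIVRecord:
    system_name: str
    intended_purpose: str
    provider: str
    development_process: str                  # design specs, architecture, algorithms, training
    data_requirements: str                    # datasets, sources, preparation, limitations
    monitoring_and_control: str               # human oversight measures
    accuracy_metrics: dict = field(default_factory=dict)    # incl. subgroup breakdowns
    robustness_and_security: list = field(default_factory=list)
    risk_management_summary: str = ""
    lifecycle_changes: list = field(default_factory=list)   # dated change log

# Versioning one record per high-risk system alongside the model itself makes it
# easier to respond when a market surveillance authority requests documentation.
```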

Human oversight (Article 14)

Article 14 requires that high-risk AI systems are designed to be effectively overseen by natural persons. For financial AI:

  • Human overseers must be able to fully understand the capabilities and limitations of the system, including its accuracy rates, error rates, and known biases.
  • Overseers must be able to correctly interpret the system's output, taking into account the characteristics of the system and available interpretability tools.
  • Overseers must be able to decide not to use the system or to disregard, override, or reverse the output.
  • For credit scoring, this means loan officers must have the ability and practical capacity to override AI-driven credit decisions — not merely a theoretical override button that is never used. Supervisory authorities will likely examine override rates as evidence of meaningful human oversight.

Practical tension with automation: Many financial institutions have moved toward straight-through processing (STP) for routine lending decisions — automated approve/decline with no human intervention for applications within certain parameters. Under the AI Act, STP for high-risk AI decisions requires careful design. The system must be configured so that human oversight is available for cases that the system identifies as requiring it (e.g. near-threshold decisions, applications with unusual characteristics), and the institution must demonstrate that the human oversight mechanism is effective, not merely nominal.
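A minimal sketch of the routing and override logging such a design implies is shown below. The thresholds, field names, and single recommendation cut-off are hypothetical assumptions, not requirements taken from the Act.

```python
# Minimal sketch: escalate near-threshold credit decisions to a human reviewer
# and record overrides as evidence of effective oversight (Article 14).
# Thresholds and field names are hypothetical assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

AUTO_APPROVE_AT = 0.75   # scores at or above: straight-through approve
AUTO_DECLINE_AT = 0.40   # scores below: straight-through decline
RECOMMEND_CUTOFF = 0.50  # the model's nominal approve/decline recommendation

@dataclass
class DecisionRecord:
    application_id: str
    score: float
    route: str                      # "auto_approve", "auto_decline", "human_review"
    model_recommendation: str
    final_outcome: Optional[str] = None
    reviewer_id: Optional[str] = None
    overridden: bool = False
    timestamp: str = ""

def route_application(application_id: str, score: float) -> DecisionRecord:
    recommendation = "approve" if score >= RECOMMEND_CUTOFF else "decline"
    if score >= AUTO_APPROVE_AT:
        route_to, outcome = "auto_approve", "approve"
    elif score < AUTO_DECLINE_AT:
        route_to, outcome = "auto_decline", "decline"
    else:
        route_to, outcome = "human_review", None   # near-threshold band goes to a loan officer
    return DecisionRecord(application_id, score, route_to, recommendation,
                          final_outcome=outcome,
                          timestamp=datetime.now(timezone.utc).isoformat())

def record_review(rec: DecisionRecord, reviewer_id: str, final_outcome: str) -> DecisionRecord:
    rec.reviewer_id = reviewer_id
    rec.final_outcome = final_outcome
    rec.overridden = (final_outcome != rec.model_recommendation)
    return rec

# Aggregated override rates from these records are exactly the kind of evidence a
# supervisor might examine when judging whether oversight is meaningful rather than nominal.
```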

Accuracy, robustness, and cybersecurity (Article 15)

Article 15 requires high-risk AI systems to achieve appropriate levels of:

  • Accuracy: The system must perform consistently with its stated purpose. For credit scoring, accuracy metrics must be documented, including performance across different demographic subgroups — aggregate accuracy is insufficient if the model performs significantly worse for specific populations.
  • Robustness: The system must be resilient to errors, faults, and inconsistencies — including adversarial attempts. Credit scoring models must be tested against adversarial inputs (e.g. synthetic identity fraud, data manipulation to inflate scores); a simple stability probe is sketched after this list.
  • Cybersecurity: The system must be protected against unauthorised access, data corruption, and model manipulation. This intersects directly with DORA requirements (see below).
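The stability probe referenced above could look like the sketch below, assuming a scikit-learn-style classifier exposing predict_proba; the noise scale and tolerance are illustrative assumptions rather than regulatory values.

```python
# Minimal sketch: perturb numeric inputs slightly and check that credit scores do
# not swing disproportionately (one facet of Article 15 robustness). Assumes a
# scikit-learn-style model; noise scale and tolerance are illustrative.
import numpy as np
import pandas as pd

def stability_probe(model, X: pd.DataFrame, numeric_cols: list,
                    noise_scale: float = 0.01, max_mean_shift: float = 0.02,
                    seed: int = 0) -> dict:
    """Mean/max score shift under small multiplicative noise on numeric features."""
    rng = np.random.default_rng(seed)
    X_noisy = X.copy()
    for col in numeric_cols:
        X_noisy[col] = X[col] * (1.0 + rng.normal(0.0, noise_scale, size=len(X)))
    base = model.predict_proba(X)[:, 1]
    noisy = model.predict_proba(X_noisy)[:, 1]
    shift = np.abs(base - noisy)
    return {
        "mean_shift": float(shift.mean()),
        "max_shift": float(shift.max()),
        "within_tolerance": bool(shift.mean() <= max_mean_shift),
    }

# Results belong in the Annex IV robustness section alongside targeted adversarial
# tests such as synthetic-identity or score-inflation scenarios.
```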

Interaction with existing financial regulation

The AI Act does not replace existing financial regulation — it adds to it. Financial institutions must navigate the intersection of the AI Act with multiple existing regulatory frameworks. In several areas, existing regulation already imposes requirements that partially address AI Act obligations; in others, the AI Act creates genuinely new demands.

DORA (Digital Operational Resilience Act)

The Digital Operational Resilience Act (Regulation (EU) 2022/2554), fully applicable from January 17, 2025, requires financial entities to establish comprehensive ICT risk management frameworks, conduct operational resilience testing, manage ICT third-party risk, and report ICT-related incidents.

Overlap with the AI Act:

  • DORA's ICT risk management framework (Articles 5–16) covers aspects of AI system risk management, particularly around operational resilience, change management, and incident handling.
  • DORA's third-party risk management requirements (Articles 28–44) apply to outsourced AI services — cloud-hosted credit scoring models, vendor-provided fraud detection, BaaS AI components — and partially address the AI Act's requirements for supply chain governance.
  • DORA's operational resilience testing (Articles 24–27) overlaps with Article 15's robustness and cybersecurity requirements.

Where they diverge: DORA is focused on operational resilience — ensuring financial institutions can withstand and recover from ICT disruptions. It does not address fundamental rights, bias, discrimination, transparency to affected individuals, or the specific data governance requirements of Article 10. An AI system can be fully DORA-compliant and still fall short of the AI Act.

MiFID II

The Markets in Financial Instruments Directive II (Directive 2014/65/EU) governs investment services and activities. Where AI is used for investment advice, portfolio management, suitability assessments, or algorithmic trading:

  • MiFID II's suitability assessment requirements (Article 25) already require firms to ensure investment recommendations are suitable for the client. AI-driven robo-advisory platforms must comply with both MiFID II suitability requirements and AI Act transparency and human oversight requirements.
  • Algorithmic trading obligations under MiFID II (Article 17) require risk controls, testing, and documentation for trading algorithms. These overlap with the AI Act's risk management and accuracy requirements, but the AI Act additionally requires assessment of fundamental rights impact and bias — concepts not addressed by MiFID II.
  • MiFID II's best execution obligations require firms to take sufficient steps to obtain the best possible result for clients. Where AI is used to determine execution venues or strategies, the AI Act's transparency requirements may impose additional disclosure obligations.

Solvency II

The Solvency II Directive (Directive 2009/138/EC) governs insurance and reinsurance undertakings. Its relevance to AI compliance:

  • Solvency II's own risk and solvency assessment (ORSA) requirements already demand that insurers understand and document the risks arising from their internal models — including pricing models.
  • The actuarial function requirements (Article 48) impose governance and validation obligations on pricing and reserving models that partially overlap with the AI Act's technical documentation and risk management requirements.
  • Solvency II's governance system requirements (Articles 41–49) — including fit and proper requirements, risk management, and internal controls — provide a foundation for AI governance, but do not address bias testing, fundamental rights assessment, or the specific human oversight requirements of Article 14.

GDPR

The General Data Protection Regulation overlaps most significantly with the AI Act for financial services. See the detailed AI Act vs GDPR comparison. Key intersections:

  • Automated individual decision-making (Article 22 GDPR): Credit scoring is the canonical example of automated decision-making with legal or similarly significant effects. GDPR Article 22 already requires meaningful information about the logic involved, the right to human intervention, and the right to contest. The AI Act's human oversight and transparency requirements extend these obligations.
  • Data protection impact assessment (DPIA): A DPIA under GDPR Article 35 is likely required for most high-risk financial AI systems. The AI Act's fundamental rights impact assessment (FRIA) is a separate obligation for certain deployers — including financial institutions — but can be coordinated with the DPIA.
  • Lawful basis for bias testing: Examining training data for bias requires processing data about protected characteristics (race, ethnicity, health status, gender). Under GDPR, this requires a lawful basis — typically Article 9(2)(g) (substantial public interest) implemented through Member State law. This is an unresolved tension that the European Data Protection Board and the AI Office are expected to address in guidance.
  • Right to explanation: GDPR provides a right to meaningful information about automated decisions. The AI Act requires that deployers inform affected persons that they are subject to a high-risk AI system (Article 26(11)). These obligations are complementary but have different scopes and triggers.

Comparative obligations across frameworks

Obligation | AI Act | DORA | MiFID II | Solvency II | GDPR
Risk management system | Article 9 — full lifecycle, including fundamental rights | Articles 5–16 — ICT risk focus | Article 16 — operational risk | Articles 44–45 — risk management function | Article 32 — security of processing
Technical documentation | Article 11 + Annex IV — granular, AI-specific | Article 6 — ICT systems documentation | Organisational requirements | Actuarial function documentation | Article 30 — records of processing
Human oversight | Article 14 — mandatory for high-risk AI | Not specifically addressed | Article 25 — suitability assessment process | Article 48 — actuarial function governance | Article 22 — right to human intervention
Bias / fairness testing | Article 10 — mandatory examination for biases | Not addressed | Not addressed | Not addressed | Implicit in non-discrimination principles
Incident reporting | Article 73 — serious incidents | Articles 17–23 — ICT incidents | Transaction reporting | Supervisory reporting | Articles 33–34 — data breach notification
Third-party oversight | Articles 22 and 25 — authorised representatives; value chain responsibilities | Articles 28–44 — ICT third-party risk | Outsourcing guidelines | Outsourcing guidelines | Article 28 — processor obligations
Penalties (max) | EUR 15M / 3% turnover (high-risk); EUR 35M / 7% (prohibited) | 1% of average daily global turnover (entities); EUR 500K–1M (individuals) | National transposition — varies | Supervisory measures | EUR 20M / 4% global turnover
Fundamental rights impact | FRIA required for certain deployers | Not addressed | Not addressed | Not addressed | DPIA (related but distinct)

Conformity assessment for financial AI

High-risk AI systems in financial services are subject to conformity assessment under Article 43. For financial AI that is not a safety component of a product covered by Annex I legislation, the conformity assessment is based on internal control — meaning the provider (typically the vendor or, for internally developed systems, the financial institution itself) conducts the assessment without involving a notified body.

This is both an advantage and a responsibility:

  • No external auditor is required. Financial institutions developing AI systems in-house or procuring from vendors can conduct conformity assessments internally, following the procedure set out in Annex VI.
  • The assessment must be rigorous and documented. Self-assessment does not mean light-touch compliance. The provider must verify compliance with every applicable requirement — risk management, data governance, technical documentation, human oversight, accuracy, robustness — and create a declaration of conformity under Article 47.
  • The documentation must withstand regulatory scrutiny. National market surveillance authorities can request full technical documentation and audit the conformity assessment. Financial institutions subject to prudential supervision (by ECB, national competent authorities, or national supervisory authorities) may also face questions about AI Act compliance through supervisory channels.
  • CE marking is required for high-risk AI systems placed on the market after the compliance deadline. The CE marking signifies that the system conforms to AI Act requirements.

For a full walkthrough of the conformity assessment process, see the conformity assessment guide.

Special case — financial AI using biometric identification: If a financial institution uses remote biometric identification (as opposed to simple one-to-one biometric verification during customer onboarding, which is excluded from the biometrics category), the conformity assessment can require the involvement of a notified body under Annex VII, notably where harmonised standards have not been applied. This is the main scenario in financial services where third-party conformity assessment comes into play.

Penalties and enforcement

The AI Act establishes a tiered penalty framework that applies to financial institutions as both providers and deployers of AI systems:

Violation | Maximum penalty
Prohibited AI practices (Article 5) | EUR 35 million or 7% of global annual turnover, whichever is higher
High-risk AI non-compliance (Chapter III requirements) | EUR 15 million or 3% of global annual turnover, whichever is higher
Incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover, whichever is higher

For major financial institutions, the turnover-based calculation produces dramatically larger figures. A global bank with EUR 50 billion in revenue faces a theoretical maximum penalty of EUR 3.5 billion (7% for prohibited practices) or EUR 1.5 billion (3% for high-risk non-compliance).
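The arithmetic behind the "whichever is higher" rule is straightforward; the snippet below reproduces the EUR 50 billion illustration above using the statutory maxima from the table.

```python
# Worked example of the "whichever is higher" cap, using the EUR 50bn turnover
# illustration above. Caps are the statutory maxima listed in the table.
def max_penalty(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    return max(fixed_cap_eur, pct_cap * turnover_eur)

turnover = 50e9
print(max_penalty(turnover, 35e6, 0.07))  # prohibited practices: 3.5e9  -> EUR 3.5 billion
print(max_penalty(turnover, 15e6, 0.03))  # high-risk non-compliance: 1.5e9 -> EUR 1.5 billion
```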

Enforcement architecture: The AI Act is enforced at the national level by market surveillance authorities designated by each Member State. For financial services, Member States may designate existing financial supervisory authorities (central banks, financial conduct authorities) as the competent authority for AI Act enforcement in the financial sector. This means the same supervisory body that conducts prudential inspections may also enforce AI Act compliance — integrating AI governance into the broader supervisory framework.

Interaction with financial regulation penalties: AI Act penalties are in addition to penalties available under other regulatory frameworks. A credit scoring model that violates both the AI Act (by failing to conduct bias testing) and GDPR (by lacking a lawful basis for processing sensitive data) can attract penalties under both regulations. There is no double-jeopardy protection between these frameworks.

For the complete penalties framework, see the penalties and fines guide.

Implementation roadmap for financial institutions

Compliance cannot be achieved in a single sprint. The following timeline reflects realistic planning horizons for financial institutions of varying sizes.

Phase | Timeline | Actions
1. AI inventory and classification | Now – Q2 2026 | Conduct a comprehensive AI systems inventory. Identify all AI systems in use, classify each as prohibited, high-risk, limited-risk, or minimal-risk. Map provider vs deployer responsibilities. Use the AI Act assessment tool for classification (a minimal inventory record is sketched after this table).
2. Gap analysis | Q2 2026 | For each high-risk system: compare current documentation, governance, and testing against AI Act requirements. Identify gaps in risk management, data governance, bias testing, human oversight, and technical documentation. Cross-reference with existing DORA, MiFID II, Solvency II, and GDPR compliance.
3. Risk management system | Q2 – Q3 2026 | Establish or extend the risk management framework to cover fundamental rights, bias risks, and foreseeable misuse. Integrate AI risk management into existing model risk management (MRM) frameworks where possible.
4. Data governance programme | Q2 – Q3 2026 | Audit training, validation, and testing datasets. Conduct bias testing across protected characteristics. Document data sources, preparation steps, and limitations. Address the lawful basis for processing sensitive data for bias detection.
5. Technical documentation | Q3 2026 | Prepare Annex IV documentation for each high-risk system. Use the technical documentation template as a starting point.
6. Human oversight mechanisms | Q3 2026 | Design and implement effective human oversight for each high-risk system. Define escalation procedures, override capabilities, and monitoring dashboards. Train human overseers on system capabilities and limitations.
7. Conformity assessment | Q3 – Q4 2026 | Conduct internal conformity assessment under Annex VI. Prepare declaration of conformity. Apply CE marking.
8. FRIA (deployers) | Q3 – Q4 2026 | Financial institutions deploying high-risk AI must conduct a fundamental rights impact assessment. Coordinate with the GDPR DPIA where applicable.
9. Supplier contracts | Q2 – Q3 2026 | Review and amend contracts with AI vendors to ensure contractual allocation of provider obligations, access to technical documentation, and cooperation on conformity assessment.
10. Post-market monitoring | August 2026 onwards | Establish ongoing monitoring, performance tracking, incident reporting procedures, and periodic model reviews. Integrate with existing model validation cycles. See the post-market monitoring guide.
11. Compliance checklist | Ongoing | Use the EU AI Act compliance checklist to track progress and ensure no requirement is missed.
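The Phase 1 inventory referenced in the table can be kept as a simple structured record per system. The sketch below is hypothetical; the fields and categories mirror the classification steps described above and are not a prescribed format.

```python
# Hypothetical record for the Phase 1 AI inventory. Fields and risk categories
# mirror the classification steps in the roadmap table; not a prescribed format.
from dataclasses import dataclass
from enum import Enum

class RiskClass(str, Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

class Role(str, Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    BOTH = "both"

@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    intended_purpose: str
    vendor: str                  # "internal" for in-house systems
    risk_class: RiskClass
    our_role: Role
    annex_iii_reference: str     # e.g. "point 5(b)" or "n/a"
    go_live_date: str
    last_significant_modification: str

# One record per system, reviewed at each phase of the roadmap, keeps the gap
# analysis and conformity assessment anchored to a single source of truth.
```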

Frequently asked questions

Is credit scoring always high-risk under the AI Act?

Yes — with one qualification. Any AI system intended to evaluate the creditworthiness of natural persons or establish their credit score is high-risk under Annex III, point 5(b). This applies regardless of the model complexity, the data inputs, or whether the system makes the final lending decision or merely provides a recommendation. The only exception is AI used solely for detecting financial fraud. If a credit scoring model has a fraud detection component, the creditworthiness function remains high-risk; the fraud detection exemption cannot be used to shield the entire system.

Does the AI Act apply to AI systems we purchase from vendors?

Yes. Financial institutions using vendor-provided AI systems (credit scoring platforms, AML solutions, insurance pricing tools) are deployers under the AI Act and carry their own compliance obligations — including human oversight, informing affected persons, conducting the FRIA (where applicable), and ensuring the system is used in accordance with its instructions for use. Vendor compliance as a provider does not discharge the deployer's independent duties. See the provider vs deployer obligations guide.

How does the AI Act interact with the existing EBA/ECB model risk management expectations?

The AI Act's requirements overlap partially with existing supervisory expectations for model risk management (MRM), model validation, and internal models. Risk management (Article 9), technical documentation (Article 11), accuracy testing (Article 15), and human oversight (Article 14) all have counterparts in prudential supervision. However, the AI Act introduces requirements that go beyond traditional MRM — specifically, mandatory bias testing under Article 10, fundamental rights impact assessment, transparency to affected individuals, and the formal conformity assessment process. Financial institutions should extend their existing MRM frameworks rather than building parallel structures.

Are anti-money laundering (AML) AI systems high-risk?

The answer depends on function and impact. AI systems used solely for fraud detection benefit from the explicit Annex III, point 5(b) exemption. However, AML systems often extend beyond detection — they produce customer risk scores, trigger enhanced due diligence, lead to account restrictions, and inform decisions to de-bank customers. Where an AML system's output materially influences a decision to deny, restrict, or terminate a customer's access to banking services, the system may fall within the Annex III, point 5 classification for AI systems affecting access to essential private services. Financial institutions should conduct a function-by-function analysis rather than assuming blanket exemption.

What happens if our AI system is already in production before August 2026?

The AI Act includes a transitional provision (Article 111(2)) for high-risk AI systems already placed on the market or put into service before August 2, 2026. These legacy systems are not required to comply immediately — provided they undergo no significant modification to their purpose, functionality, or design. Any significant modification — a major model retrain, a change in the target population, incorporation of new data sources — brings the modified system within the compliance requirements from that point onwards. Financial institutions should therefore assess each system's modification history and planned development roadmap to determine when compliance is triggered.

Do we need a fundamental rights impact assessment?

Under Article 27, deployers of high-risk AI systems that are bodies governed by public law, private entities providing public services, or deployers of the AI systems listed in Annex III, points 5(b) and 5(c) — creditworthiness assessment and life and health insurance pricing — must conduct a FRIA before putting the system into use. Credit-granting institutions and insurers deploying high-risk AI in lending and insurance pricing fall within this scope. The FRIA is a separate document from the GDPR DPIA, though the two can be conducted in parallel and share factual analysis. See the FRIA guide for the full methodology.

Next steps

Financial institutions have a narrow but viable window to achieve AI Act compliance before the August 2026 deadline. The key is to start now, leverage existing regulatory frameworks where they overlap, and identify — early — the genuinely new obligations that the AI Act introduces.

Start with classification. Use the Legalithm AI Act Assessment tool to classify your AI systems, identify which are high-risk, and generate a tailored obligations report. Then work through the EU AI Act compliance checklist to systematically close gaps.

For a broader view of the regulatory landscape and how it compares across jurisdictions, see the global AI regulation comparison.

