
How to build an AI systems inventory for EU AI Act compliance. Shadow AI discovery, required fields, prioritisation, and templates.

Legalithm Team · 20 min read · AI Act · Updated Dec 2025

How to Build an AI Systems Inventory for EU AI Act Compliance

TL;DR

  • An AI systems inventory is the structural prerequisite for every other EU AI Act compliance activity — you cannot classify risks, assign roles, document systems, or complete conformity assessment without it.
  • Article 3(1) defines AI systems broadly: any machine-based system designed to operate with varying autonomy that infers outputs like predictions, recommendations, or decisions. This includes ML models, generative AI, computer vision, NLP systems, and AI features embedded in SaaS tools.
  • Shadow AI — unapproved tools adopted by employees — is the largest inventory blind spot. Discover it through structured surveys, network traffic analysis, expense audits, IT security reviews, and direct conversations.
  • Most organisations deploy far more third-party AI (as deployers) than they build (as providers). Embedded AI in CRM, HR, finance, marketing, and development tools must all be inventoried.
  • A complete inventory entry includes core fields (name, vendor, purpose, deployment), regulatory fields (risk level, role, Annex III area, compliance status), and role-specific fields for providers and deployers.
  • Inventory is a living process, not a one-time exercise. Establish quarterly reviews and event-triggered updates.

Why inventory is the foundation of AI Act compliance

Every compliance framework starts with the same question: what do you have? Under the EU AI Act, the answer to that question is your AI systems inventory.

The reason inventory comes first is structural. Without knowing which AI systems your organisation develops, deploys, imports, or distributes, you cannot:

  • Classify risk levels under Article 6 and Annex III.
  • Assign roles (provider, deployer, importer, distributor) under Article 3, which determines your obligations.
  • Prepare technical documentation under Annex IV for the right systems.
  • Complete conformity assessment under Article 43 — you cannot assess a system you have not identified.
  • Register systems in the EU database under Article 49.
  • Conduct fundamental rights impact assessments under Article 27 for the right deployment contexts.

A compliance program built without a complete inventory is a compliance program built on assumptions. When a market surveillance authority asks "show me your AI systems and their compliance status," the inventory is the first document they expect to see.

What counts as an AI system? Article 3(1) definition in practice

Article 3(1) defines an AI system as:

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Systems that qualify

  • ML models in production: Classification, regression, recommendation, and ranking systems. A product recommendation engine, a churn prediction model, a dynamic pricing algorithm.
  • Generative AI: Large language models, image generators, code assistants, chatbot integrations. ChatGPT, Claude, Gemini, Midjourney, GitHub Copilot, internal LLM deployments.
  • Computer vision: Object detection, facial recognition, document OCR with AI-based extraction, quality inspection on manufacturing lines.
  • NLP systems: Sentiment analysis, entity extraction, automated summarisation, machine translation with neural models.
  • Decision support tools: Credit scoring, hiring screening, fraud detection, risk profiling, insurance underwriting, benefits eligibility determination.
  • Robotic process automation with AI: RPA bots that use ML for document understanding, intent classification, or adaptive routing (not traditional rule-based RPA).
  • Embedded AI in SaaS: AI features in CRM platforms, HR tools, marketing automation, customer support software, development tools, and finance platforms (detailed below).

Systems that typically do not qualify

  • Rule-based automation: RPA bots executing fixed scripts, Excel macros, IF-THEN decision trees with no learning component. A workflow that routes tickets based on keywords with static rules is not AI.
  • Simple statistical models: Linear regression without adaptive elements, basic threshold-based alerts, moving averages. A simple business rule that flags transactions over EUR 10,000 is not AI.
  • Traditional software: Database queries, deterministic algorithms, lookup tables, calculators. A tax calculator that applies statutory rates is not AI.
  • Conventional search: Keyword-based search without AI ranking. Standard SQL queries.
  • Static dashboards: BI dashboards that display metrics without generating predictions or recommendations.

Borderline cases — when in doubt, include

The boundary between "AI system" and "advanced software" is deliberately broad in the regulation. When classification is ambiguous, include the system in your inventory and classify it. It is far safer to over-include and later reclassify as minimal-risk than to omit a system and discover the gap during an audit.

Real-world example — borderline case: A logistics company uses a route optimisation tool that uses historical traffic data and heuristic algorithms but also incorporates a predictive model for traffic forecasting. The heuristic component alone would not qualify, but the predictive model component makes this an AI system. Include it.

Step 1: Discovery — find all AI systems

Method 1: Procurement and vendor records

Start with your procurement, vendor management, and IT asset management systems. Search contracts, purchase orders, and vendor agreements for keywords: AI, machine learning, neural network, predictive analytics, natural language processing, computer vision, generative, intelligent, adaptive, automated decision.

For each vendor tool identified:

  • Does the vendor describe the product as using AI or ML?
  • Does the tool generate predictions, recommendations, classifications, or content?
  • Does the tool adapt or improve based on data over time?

Real-world example — procurement audit: A mid-size insurance company reviews its 85 active SaaS contracts and finds 12 explicitly mention AI or ML capabilities. Further investigation reveals 7 additional tools with embedded AI features not mentioned in the contract but described in product documentation.
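The keyword screen in Method 1 can be sketched as a simple scan over contract text. The keyword list mirrors the search terms above, and the contract snippets are purely illustrative; a real screen would run over exported contract documents from your procurement system.

```python
import re

# Keyword screen from Method 1; the list mirrors the search terms above
# and should be extended for your own contract corpus.
AI_KEYWORDS = [
    "machine learning", "neural network", "predictive analytics",
    "natural language processing", "computer vision", "generative",
    "automated decision", r"\bAI\b", r"\bML\b",
]

def flag_contract(text: str) -> list[str]:
    """Return the AI-related keywords found in a contract or purchase-order text."""
    return [kw for kw in AI_KEYWORDS if re.search(kw, text, flags=re.IGNORECASE)]

# Hypothetical contract snippets for illustration.
print(flag_contract("Licence for a CRM platform with a predictive analytics module."))
# ['predictive analytics']
print(flag_contract("Standard office furniture supply agreement."))
# []
```

Any contract that returns a non-empty list goes on the manual review queue; the keyword hit alone does not establish that the tool is an AI system under Article 3(1).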

Method 2: Shadow AI discovery

Shadow AI — unapproved AI tools adopted by employees without formal IT procurement — is one of the largest compliance risks under the AI Act. Employees adopt AI tools because they are useful, often without realising the regulatory implications.

Discovery method 1: Department-wide surveys

Send structured questionnaires to every team lead and department head. Ask specifically about AI-powered tools used in daily work. Critical: many people do not think of ChatGPT, AI features in Slack, or AI-powered Excel add-ins as "AI systems."

Frame questions concretely:

  • "Does your team use any tool that generates text, summaries, or drafts?" (catches LLMs)
  • "Does your team use any tool that makes recommendations or predictions?" (catches ML tools)
  • "Does your team use any tool that analyses images, documents, or audio?" (catches vision/NLP)
  • "Have any team members subscribed to AI tools using personal accounts or corporate cards?" (catches shadow subscriptions)

Discovery method 2: Network traffic analysis

Work with your IT or security team to review outbound network connections to known AI service endpoints:

  • api.openai.com, chat.openai.com (OpenAI/ChatGPT)
  • api.anthropic.com, claude.ai (Anthropic/Claude)
  • generativelanguage.googleapis.com, gemini.google.com (Google Gemini)
  • api.mistral.ai (Mistral)
  • copilot.github.com, api.githubcopilot.com (GitHub Copilot)
  • api.midjourney.com (Midjourney)
  • endpoints for Hugging Face, Replicate, and other model hosting platforms

This method catches tools used on corporate devices and networks but misses personal devices on personal networks.
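A minimal sketch of matching an exported DNS or proxy log against the endpoint list above — the hostnames below are a subset of that list, and the log format (one queried hostname per entry) is an assumption; adapt it to whatever your security tooling exports.

```python
# Known AI service endpoints from the list above; extend as new vendors appear.
AI_ENDPOINTS = {
    "api.openai.com": "OpenAI",
    "chat.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "claude.ai": "Anthropic",
    "generativelanguage.googleapis.com": "Google Gemini",
    "api.mistral.ai": "Mistral",
    "api.githubcopilot.com": "GitHub Copilot",
}

def summarise_ai_traffic(hostnames: list[str]) -> dict[str, int]:
    """Count lookups per AI vendor from a list of queried hostnames."""
    counts: dict[str, int] = {}
    for host in hostnames:
        vendor = AI_ENDPOINTS.get(host.lower())
        if vendor:
            counts[vendor] = counts.get(vendor, 0) + 1
    return counts

# Hypothetical export from a DNS or proxy log.
log = ["api.openai.com", "example.com", "claude.ai", "api.openai.com"]
print(summarise_ai_traffic(log))  # {'OpenAI': 2, 'Anthropic': 1}
```

High lookup counts for a vendor with no approved tool in your inventory are a strong shadow-AI signal worth following up with the survey and conversation methods below.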

Discovery method 3: Expense and subscription audits

Review corporate credit card statements, expense reports, and reimbursement requests for AI tool subscriptions. Look for charges from OpenAI, Anthropic, Google (AI-specific), Jasper, Copy.ai, Notion AI, Grammarly Premium, Otter.ai, Descript, Runway, and similar providers.

Also check app store purchase histories on corporate devices and any shared subscription management tools (1Password Teams, shared credential vaults).

Discovery method 4: IT security and endpoint reviews

Review browser extension inventories on managed devices, desktop application installations, SaaS integrations via OAuth/SSO, and API keys stored in code repositories or credential managers. AI browser extensions (Grammarly AI, ChatGPT sidebar extensions, AI summarisers) are frequently adopted without IT approval.

Discovery method 5: Direct conversations

Surveys miss tools that people consider "just another app" or do not recognise as AI. Have compliance or team leads conduct 15-minute conversations with individual teams. Ask open-ended questions about workflow improvements, new tools adopted in the past year, and how they handle tasks like drafting, analysis, and data processing.

This method has the highest discovery rate for shadow AI that evades all technical detection methods.

Method 3: Embedded AI in existing SaaS tools

Many SaaS platforms your organisation already uses have added AI features — sometimes enabled by default, sometimes opt-in. These embedded AI capabilities may create AI Act obligations even though you did not consciously adopt an "AI tool."

Check whether these tools in your stack have AI capabilities:

| Category | Tool | AI feature to check |
| --- | --- | --- |
| CRM | Salesforce | Einstein AI (lead scoring, opportunity insights, predictive forecasting) |
| CRM | HubSpot | AI content assistant, predictive lead scoring, conversation intelligence |
| CRM | Zoho | Zia AI (sales predictions, anomaly detection, sentiment analysis) |
| HR | Workday | AI-driven talent recommendations, skills inference, compensation benchmarking |
| HR | BambooHR | AI-powered insights, turnover predictions |
| HR | LinkedIn Recruiter | AI candidate matching, InMail suggestions, talent pool insights |
| Customer support | Intercom | Fin AI agent (autonomous resolution), AI summarisation, classification |
| Customer support | Zendesk | AI-powered ticket routing, answer suggestions, sentiment detection |
| Customer support | Freshdesk | Freddy AI (auto-triage, suggested responses, bot capabilities) |
| Marketing | Mailchimp | Send time optimisation, predictive segmentation, content recommendations |
| Marketing | HubSpot Marketing | AI content generation, predictive analytics, adaptive testing |
| Marketing | Jasper | AI content generation (text, images, marketing copy) |
| Finance | Stripe | Radar (ML-based fraud detection), revenue recognition predictions |
| Finance | Xero | AI-powered bank reconciliation, invoice predictions |
| Finance | QuickBooks | AI-driven categorisation, cash flow predictions |
| Development | GitHub Copilot | AI code completion, code review suggestions |
| Development | Snyk | AI-powered vulnerability detection and fix suggestions |
| Development | Datadog | AI-based anomaly detection, log pattern analysis |
| Collaboration | Microsoft 365 Copilot | AI content generation, summarisation, data analysis across Office apps |
| Collaboration | Notion AI | AI writing, summarisation, Q&A across workspace |
| Collaboration | Slack AI | AI-powered search, channel summaries, thread digests |
Contact vendors directly if their documentation is unclear. Ask: "Does your product use AI or machine learning in any feature that generates predictions, recommendations, or automated decisions? If so, which features and what data do they process?"

Step 2: Classify each system

For each AI system in your inventory, determine two things: risk level and your role.

Risk level classification

Apply the Article 6 classification logic:

  • Prohibited (Article 5): Social scoring, manipulative subliminal techniques, real-time remote biometric identification (narrow exceptions), emotion recognition in workplaces/education. Cease immediately; these practices have been unlawful since 2 February 2025.
  • High-risk (Annex III): AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice/democracy. Full compliance obligations by 2 August 2026.
  • Limited risk (Article 50): Chatbots, deepfake generators, emotion recognition, biometric categorisation. Transparency obligations apply.
  • Minimal risk: Spam filters, recommendation engines, AI-assisted game mechanics. No mandatory obligations (voluntary codes under Article 95).

See the classification guide for the detailed decision flowchart, and check for the Article 6(3) exception that may remove Annex III systems from high-risk classification if the AI does not pose a significant risk.

Role determination

For each system, determine whether you are a:

  • Provider: You developed the system, or placed it on the market or put it into service under your name or trademark.
  • Deployer: You use a system developed by someone else under your authority in your professional capacity.
  • Importer: You make a non-EU provider's system available on the EU market.
  • Distributor: You make a system available in the supply chain without being the provider, importer, or deployer.

One organisation can hold different roles for different systems. A fintech might be the provider of its proprietary credit-scoring model and the deployer of a third-party fraud detection tool.

See the provider vs deployer guide for the full side-by-side obligation comparison.

Step 3: Document each system — comprehensive inventory fields

Core identification fields

| Field | Description | Example |
| --- | --- | --- |
| System ID | Unique internal identifier | AI-2026-017 |
| System name | Official product name or internal name | "CreditScore v3.2" or "Salesforce Einstein" |
| Vendor / provider | Who built it (external vendor or internal team) | Internal / Salesforce Inc. |
| Version | Current deployed version | 3.2.1 |
| Description | Plain-language summary of what the system does | "Predicts creditworthiness of loan applicants using financial history and behavioural data" |
| Intended purpose | The specific task or use case per provider documentation | "Automated credit scoring for consumer lending decisions" |
| Actual use | How the system is actually deployed (may differ from intended purpose) | "Used for both consumer and SME lending decisions" |
| Department(s) | Which teams or business units use the system | Risk, Lending Operations |
| Deployment date | When the system was first put into service | 2024-03-15 |
| Deployment method | SaaS, API, on-premise, embedded in product, edge | API (cloud-hosted) |
| Data inputs | What data the system processes | Financial history, transaction data, employment status |
| Output type | What the system produces | Credit score (0–1000), risk category, approval recommendation |

Regulatory classification fields

| Field | Description | Example |
| --- | --- | --- |
| Risk classification | Prohibited / high-risk / limited risk / minimal risk | High-risk |
| Classification basis | Article and Annex reference | Annex III, point 5(a) — creditworthiness |
| Article 6(3) exception | Does the exception apply? Document reasoning | No — AI output materially influences lending decisions |
| Your role | Provider / deployer / importer / distributor | Provider |
| Personal data processed | Yes/no — and if yes, what categories | Yes — financial data, employment data, identity data |
| Special category data | Any Article 9 GDPR data processed? | No |
| Affected persons | Who is affected by the system's outputs | Loan applicants (natural persons) |
| Number of affected persons | Estimated annual scale | ~50,000 applicants/year |
| Human oversight mechanism | How human oversight is currently implemented | Loan officers review all recommendations; override authority documented |
| Compliance status | Not started / in progress / compliant / non-compliant | In progress |
| Compliance owner | Named individual responsible | Maria Gonzalez, Head of Compliance |
| Target completion date | When compliance should be achieved | 2026-07-01 |
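The field tables can be mirrored in a minimal record schema, whether the inventory lives in a spreadsheet, a database, or a dedicated tool. The sketch below covers only a subset of the fields and reuses the illustrative values from the examples above; a production schema would include all fields plus the role-specific extensions.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """Minimal inventory record mirroring a subset of the fields above."""
    system_id: str
    name: str
    vendor: str
    description: str
    intended_purpose: str
    actual_use: str
    departments: list[str]
    risk_classification: str   # prohibited / high-risk / limited risk / minimal risk
    classification_basis: str  # article and annex reference
    role: str                  # provider / deployer / importer / distributor
    personal_data: bool
    compliance_status: str = "not started"
    compliance_owner: str = ""

entry = InventoryEntry(
    system_id="AI-2026-017",
    name="CreditScore v3.2",
    vendor="Internal",
    description="Predicts creditworthiness of loan applicants",
    intended_purpose="Automated credit scoring for consumer lending decisions",
    actual_use="Used for both consumer and SME lending decisions",
    departments=["Risk", "Lending Operations"],
    risk_classification="high-risk",
    classification_basis="Annex III, point 5(a)",
    role="provider",
    personal_data=True,
    compliance_owner="Maria Gonzalez",
)
```

Keeping intended purpose and actual use as separate fields matters: as Mistake 4 below explains, divergence between the two can change the risk classification.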

Provider-specific fields (if you are the provider)

| Field | Description |
| --- | --- |
| Technical documentation status | Not started / in progress / complete |
| Annex IV sections completed | Which of the 9 sections are done |
| Conformity assessment route | Self-assessment (Annex VI) or notified body (Annex VII) |
| QMS status | Quality management system established and documented? |
| Risk management system status | Article 9 risk management operational? |
| EU database registration | Registered / pending / not yet started |
| CE marking | Affixed / pending |
| Post-market monitoring plan | Established / in progress / not started |

Deployer-specific fields (if you are the deployer)

| Field | Description |
| --- | --- |
| Provider name and contact | Who provides the system |
| Provider compliance verified | EU declaration of conformity obtained? CE marking confirmed? |
| Instructions for use | Received and reviewed? |
| Log retention | Automatic logs being retained for 6+ months? |
| Affected person notification | Process in place to inform affected persons? |
| FRIA required | Yes/no — and if yes, status |
| FRIA completion date | When the fundamental rights impact assessment was completed |
| Provider contact for incidents | Who to contact for malfunctions and incidents |

Step 4: Prioritise based on risk and exposure

Not all systems need equal urgency. Use this prioritisation framework:

| Priority tier | Criteria | Action timeline |
| --- | --- | --- |
| P0 — Immediate | Prohibited practices (Article 5) | Cease immediately — already enforceable |
| P1 — Urgent | High-risk systems where you are the provider | Full compliance by August 2026 — start now |
| P2 — High | High-risk systems where you are the deployer | Deployer obligations by August 2026 |
| P3 — Medium | Limited-risk systems (transparency obligations) | Transparency measures by August 2026 |
| P4 — Low | Minimal-risk systems | Voluntary codes of conduct; no mandatory deadline |

Within each tier, further prioritise by:

  1. Impact scope: Systems affecting more people or more vulnerable populations come first.
  2. Compliance gap size: Systems with the most missing documentation, controls, or processes.
  3. Enforcement visibility: Systems used in public-facing contexts or regulated sectors (finance, healthcare, education) that are more likely to attract regulatory attention.
  4. Fine exposure: Consider the potential penalties for non-compliance — up to EUR 15 million or 3% of global turnover for high-risk obligations. See the penalties guide.
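The tier mapping can be sketched as a small helper for sorting an inventory export. This is the simplified version: it covers only risk level and role, not the within-tier factors listed above, and the system names are illustrative.

```python
def priority_tier(risk: str, role: str) -> str:
    """Map risk level and role to the P0-P4 tiers above (simplified)."""
    if risk == "prohibited":
        return "P0"
    if risk == "high-risk":
        return "P1" if role == "provider" else "P2"
    if risk == "limited risk":
        return "P3"
    return "P4"

# Illustrative inventory slice: (name, risk classification, role).
systems = [
    ("Spam filter", "minimal risk", "deployer"),
    ("CreditScore v3.2", "high-risk", "provider"),
    ("Support chatbot", "limited risk", "deployer"),
]
for name, risk, role in sorted(systems, key=lambda s: priority_tier(s[1], s[2])):
    print(priority_tier(risk, role), name)
# P1 CreditScore v3.2
# P3 Support chatbot
# P4 Spam filter
```

Sorting by tier string works because the P0-P4 labels sort lexically in priority order.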

Step 5: Maintain the inventory as a living document

An AI inventory taken in April 2026 is out of date by June 2026. AI adoption is rapid, tools evolve, and new use cases emerge continuously.

Regular review cadence

| Review type | Frequency | Scope |
| --- | --- | --- |
| Full inventory review | Quarterly | All systems — verify accuracy, completeness, classification |
| New system intake | Continuous | Every new AI procurement or development project |
| Shadow AI sweep | Semi-annually | Repeat discovery methods across all departments |
| Vendor AI feature check | Semi-annually | Check existing vendors for newly added AI capabilities |

Event-triggered updates

Update the inventory immediately when any of the following occurs:

  • A new AI system is procured, developed, or adopted (including shadow AI discovered after the fact).
  • An existing system is substantially modified — new model, new data source, new use case.
  • A system's intended purpose changes (this may change risk classification).
  • A system is decommissioned or a vendor contract is terminated.
  • A vendor adds AI features to an existing tool you use.
  • A department adopts a new AI tool, including free-tier or trial tools.
  • A risk classification changes (e.g., a system previously considered minimal-risk is now used in a high-risk context).
  • Regulatory guidance clarifies classification for a system type.

Governance controls

Build inventory maintenance into existing processes:

  • Procurement: Add an AI screening question to all new software procurement workflows. "Does this tool use AI or ML to generate predictions, recommendations, or decisions?"
  • IT onboarding: Include AI tool usage questions in employee onboarding and IT provisioning processes.
  • Change management: Require AI inventory updates as part of the change management process for system modifications.
  • Budget approval: Flag expense requests and purchase orders that match AI vendor names or keywords.

Common mistakes in AI inventory building

Mistake 1: Only inventorying systems you built

If you only list AI systems your company developed, you miss the vast majority of your AI footprint. Most organisations deploy far more third-party AI than they build. SaaS tools with embedded AI features are the most commonly overlooked. A company with 2 internally built AI systems might be deploying 15–20 third-party AI tools.

Mistake 2: Relying on IT alone for discovery

AI adoption is decentralised. Sales, marketing, HR, finance, and operations adopt AI tools independently — often using corporate cards, free tiers, or personal accounts. An inventory driven only by the IT department will miss shadow AI and department-specific tools. Involve every business unit and conduct cross-functional discovery.

Mistake 3: Treating inventory as a one-time exercise

An AI inventory taken once and filed away degrades immediately. New tools are adopted weekly. Vendors add AI features to existing products. Employees find new AI tools. Without ongoing maintenance processes, your inventory becomes a historical snapshot rather than a compliance tool.

Mistake 4: Not recording actual use vs intended purpose

The same AI system can be low-risk in one context and high-risk in another. A chatbot used for general customer FAQs is limited risk; the same chatbot repurposed for mental health triage may be high-risk. A recommendation engine used for product suggestions is minimal risk; the same engine used for benefits eligibility screening is high-risk. Record how each system is actually used, not just its marketed purpose.

Mistake 5: Skipping the "is it AI?" analysis

Teams either over-include everything (treating basic automation as AI) or under-include (missing genuine AI systems because they do not look like "traditional AI"). Apply the Article 3(1) definition systematically: does the system infer outputs (predictions, recommendations, decisions, content) from inputs using a learned or adaptive model? Document the reasoning for borderline cases.

Mistake 6: Not assigning ownership per system

An inventory without clear compliance ownership per system leads to diffusion of responsibility. Every system needs a named compliance owner — the person accountable for ensuring that system meets its obligations by the applicable deadline. This is typically the product owner (for provider systems) or the business unit lead (for deployed systems).

Practical template: inventory spreadsheet structure

For organisations starting from scratch, a structured spreadsheet is a practical starting point before graduating to a dedicated tool. Organise it as follows:

Tab 1: Master inventory — one row per AI system, containing all core identification and regulatory classification fields listed above.

Tab 2: Provider systems — filtered view of systems where you are the provider, with provider-specific compliance tracking fields.

Tab 3: Deployer systems — filtered view of systems where you are the deployer, with deployer-specific compliance tracking fields.

Tab 4: Discovery log — records of discovery activities (date, method, department, findings) to demonstrate due diligence in building the inventory.

Tab 5: Change log — records of all inventory changes (additions, removals, classification changes, status updates) with dates and rationale.

As your inventory matures, consider moving to a dedicated tool. See the compliance software tools comparison for options.
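Assuming plain CSV files as the starting point (one file per tab), the five-tab structure could be bootstrapped like this; the file names and the abridged header rows are illustrative, drawn from the field tables above.

```python
import csv
from pathlib import Path

# One CSV file per tab; headers abridged from the field tables above.
TABS = {
    "master_inventory.csv": ["System ID", "Name", "Vendor", "Risk", "Role",
                             "Compliance status", "Owner"],
    "provider_systems.csv": ["System ID", "Tech doc status", "Conformity route",
                             "EU DB registration", "CE marking"],
    "deployer_systems.csv": ["System ID", "Provider", "DoC obtained",
                             "Log retention", "FRIA status"],
    "discovery_log.csv": ["Date", "Method", "Department", "Findings"],
    "change_log.csv": ["Date", "System ID", "Change", "Rationale"],
}

def bootstrap(folder: str) -> None:
    """Create the five-tab structure as CSV files with header rows only."""
    out = Path(folder)
    out.mkdir(parents=True, exist_ok=True)
    for filename, headers in TABS.items():
        with open(out / filename, "w", newline="") as f:
            csv.writer(f).writerow(headers)

bootstrap("ai_inventory")
```

Keeping the discovery and change logs as separate files from day one makes the due-diligence trail automatic rather than an afterthought.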

Frequently asked questions

How many AI systems does a typical company have?

More than you expect. Research from 2025 suggests that mid-size companies (500–5,000 employees) typically use 15–40 AI systems when embedded SaaS AI features are included. Large enterprises may have 100+. The biggest surprise for most organisations is not the internally built systems — it is the volume of third-party AI embedded in everyday tools.

Do I need to inventory AI systems used for internal purposes only?

Yes. The AI Act applies to AI systems "put into service" — which includes internal use. An AI system used internally for employee performance evaluation (Annex III, point 4) is high-risk regardless of whether it is customer-facing. Internal use does not reduce regulatory obligations.

What about AI tools used by individual employees on personal accounts?

If an employee uses an AI tool (e.g., ChatGPT via a personal subscription) in their professional capacity — meaning for work tasks — the organisation may still have obligations under the AI Act, particularly if the output influences decisions about natural persons. This is a grey area, but the conservative approach is to include such tools in your inventory and establish policies governing their use.

Should I inventory AI systems that are clearly minimal-risk?

Yes. Include them in the inventory even though minimal-risk systems have no mandatory obligations. Reasons: (1) classification can change if the use case changes, (2) a complete inventory demonstrates due diligence to regulators, (3) you need a baseline to identify when a new use crosses into a higher risk tier. Mark them as minimal-risk and move on — the documentation burden for minimal-risk systems is negligible.

How does the AI systems inventory relate to the EU database registration (Article 49)?

They are related but distinct. Your internal AI systems inventory is comprehensive — it covers all AI systems across all risk levels and all roles. The EU database registration under Article 49 is a public registration requirement that applies only to high-risk AI systems (and certain limited-risk systems). Your internal inventory feeds the EU database registration, but the inventory itself is broader.

Can we use Legalithm or similar tools to manage the inventory?

Yes. Dedicated AI Act compliance tools can help structure and maintain the inventory, automate classification, and track compliance status. However, the discovery phase — particularly shadow AI detection — requires human effort regardless of tooling. No tool can automatically discover every AI system in your organisation without cross-functional input.

Next steps

  1. Start with procurement records and IT asset inventories to capture formally approved AI systems.
  2. Run a shadow AI discovery exercise across all departments using the five methods described above.
  3. Check all existing SaaS tools for embedded AI features using the vendor table.
  4. Classify each system using the risk classification guide.
  5. Determine your role for each system using the provider vs deployer guide.
  6. Prioritise high-risk systems for immediate compliance action.
  7. Establish ongoing maintenance cadence and governance controls.

Run the free AI Act assessment to classify your AI systems and identify applicable obligations.

For article-by-article guidance, see the complete AI Act guide.

Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.

AI Act
AI Inventory
Compliance
Shadow AI
Risk Classification
AI Register
