Before you can assess risks, document systems, or build a governance structure, you need to know what you have. An AI inventory — a complete register of all AI systems you use — is the foundation of all AI Act compliance.
It sounds simple. It is not. Most organisations underestimate the number of AI systems they actually deploy, because the definition of "AI system" is broader than most people intuitively think.
What Is an AI Inventory?
An AI inventory is a structured register that documents all AI systems in your organisation. "Systems" includes:
- Standalone AI applications you have purchased (e.g. recruitment tools, credit scoring)
- AI features integrated into platforms you already use (e.g. Copilot in Microsoft 365, AI suggestions in your ERP)
- AI models you have built or fine-tuned internally
- Generative AI services employees use in daily work
The EU AI Act's definition of an AI system (Article 3(1)) is broad: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." That covers far more than most people expect.
The Discovery Process: Find What You Do Not Know You Have
The biggest challenge for many organisations is shadow AI — AI systems used without IT's knowledge or approval. A 2025 survey found that 68% of employees use at least one AI tool not approved by their IT department.
A structured discovery process has four phases:
Phase 1: IT systems inventory
Review your existing software inventory (your CMDB or IT Asset Management system). Identify all applications with known AI features. Contact vendors to confirm which AI features are active in your contract.

Phase 2: Business process review
Interview representatives from each business area: "Do you use AI systems in your daily work?" Include questions about specific process areas: recruitment, customer service, finance, production, logistics, marketing.

Phase 3: Vendor review
Send a simple query to your vendor base: "Does your solution include AI features? Which ones?" Many vendors add AI features as standard — you may already be deploying AI systems you are unaware of.

Phase 4: Shadow AI identification
Use IT log data, anonymised browser history analysis, or voluntary self-reporting to identify uncontrolled AI services. Combine this with clear communication that the goal is registration, not punishment.
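The log-analysis part of Phase 4 can be sketched in a few lines. This is a minimal illustration, not a product: the domain watchlist is hypothetical (maintain your own), and the `timestamp,user,domain` log format is an assumption — adapt the parsing to whatever your proxy or firewall actually exports.

```python
from collections import Counter

# Hypothetical watchlist of AI service domains -- maintain your own list.
AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def scan_proxy_log(lines):
    """Count visits to known AI services per domain.

    Assumes each line is 'timestamp,user,domain' -- an illustrative
    format; adapt to your proxy's actual export.
    """
    hits = Counter()
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) != 3:
            continue  # skip malformed lines
        _, _, domain = parts
        if domain in AI_SERVICE_DOMAINS:
            hits[domain] += 1
    return hits

sample = [
    "2025-03-01T09:12,u1,chat.openai.com",
    "2025-03-01T09:13,u2,example.com",
    "2025-03-01T09:14,u3,claude.ai",
    "2025-03-01T09:15,u1,chat.openai.com",
]
hits = scan_proxy_log(sample)
```

Aggregate counts per domain (not per user) are usually enough to decide which services need a registration conversation, and they keep the analysis consistent with the "registration, not punishment" message.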
15 Questions Per AI System
Once you have identified an AI system, you need to collect a standard set of information. Here are the 15 most important:
Identification
1. What is the system's name and version?
2. Who is the vendor?
3. When was the system deployed?
4. Who is the internal system owner?

Purpose and use
5. What is the system's intended purpose?
6. Which business processes does it support?
7. Who are the primary users (number, roles)?
8. Does the system produce output used to make decisions about individuals?

Data and operation
9. What data is input to the system? (Personal data? Sensitive categories?)
10. Where is data stored? (Locally? Vendor cloud? Geography?)
11. What is the system's model architecture? (Rule-based, machine learning, generative AI?)
12. Does the system receive regular updates that change its behaviour?

Risk and governance
13. Does the system fall under an Annex III category? (Preliminary assessment)
14. What are the system's key dependencies?
15. Who is the contact person at the vendor for compliance questions?
Minimum Fields Per System in the Inventory
An AI inventory does not need to be a database. A structured spreadsheet with these fields is enough to get started:
| Field | Description |
|---|---|
| System ID | Unique identification number |
| Name | System name |
| Vendor | Company name + contact person |
| Version | Current version/release |
| Purpose | Brief description (1–2 sentences) |
| Business area | Which department owns the system |
| System owner | Internal contact person |
| Risk level | Unacceptable / High-risk / Limited / Minimal |
| Annex III category | Relevant category if high-risk |
| Data types | Types of data the system processes |
| Deployment date | When the system went live |
| Last review | Date of most recent risk assessment |
| Status | Active / Under assessment / Phased out |
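Even if you start in a spreadsheet, it can help to think of each row as a typed record. A minimal sketch, with field names mirroring the table above (the record and vendor names in the example are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

# Risk levels exactly as listed in the inventory table.
RISK_LEVELS = {"Unacceptable", "High-risk", "Limited", "Minimal"}

@dataclass
class AISystemRecord:
    system_id: str
    name: str
    vendor: str
    version: str
    purpose: str
    business_area: str
    system_owner: str
    risk_level: str
    data_types: str
    deployment_date: str                      # ISO date, e.g. "2024-06-01"
    status: str = "Under assessment"
    annex_iii_category: Optional[str] = None  # only relevant if high-risk
    last_review: Optional[str] = None

    def validate(self):
        """Return a list of completeness problems (empty if none)."""
        problems = []
        if self.risk_level not in RISK_LEVELS:
            problems.append(f"unknown risk level: {self.risk_level}")
        if self.risk_level == "High-risk" and not self.annex_iii_category:
            problems.append("high-risk system missing Annex III category")
        return problems
```

A simple `validate()` like this catches the most common gap in first-round inventories: systems marked high-risk without a stated Annex III category.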
Often-Overlooked AI Systems in Midmarket
Here is a list of systems many midmarket organisations miss in the first inventory round:
Microsoft 365 / Google Workspace AI features
Copilot in Teams, Word, and Excel. AI-assisted search in SharePoint. Gmail Smart Compose. These are actively in use by most organisations with M365 or Google Workspace — and many do not know they have enabled them.

CRM and marketing automation
HubSpot, Salesforce, and similar platforms have AI lead scoring, predictive send-time, and AI-generated content. Check your current settings.

ERP-integrated AI
SAP, Dynamics, and Visma have all added AI features in recent versions. Anomaly detection, forecast models, and AI-assisted workflows.

HR and recruitment
ATS systems with CV screening and candidate ranking. Onboarding platforms with AI personalisation. Performance management with AI-assisted assessments.

Customer service
Chatbots and AI-assisted ticket routing systems. Voice AI in contact centres. Sentiment analysis on customer feedback.

Security and monitoring
SIEM solutions with AI-driven anomaly detection. Email security with AI phishing detection. Access management with behavioural analytics.

Production and OT
Predictive maintenance via sensor data. AI-controlled quality control. Energy optimisation.
Maintenance: Inventory as a Living Document
An inventory updated only once per year is not an inventory — it is a historical artefact. Here is what should trigger an update:
- New vendor agreement with AI features: Add the system before deployment
- Major vendor update: Review whether AI functionality has changed
- Change in business use: The system is now used for a new purpose
- Incident or failure: Review and possibly update risk level
- Change in AI Act guidance: Authorities' interpretation can change classification
Recommended processes:
- Monthly review of new contracts and vendor updates
- Semi-annual review of the entire inventory with system owners
- Annual review with risk assessment and compliance check
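The annual-review trigger above is easy to automate. A sketch, assuming the inventory can be exported as (name, last-review-date) pairs; the 365-day threshold matches the annual cadence and should be tuned to your own process:

```python
from datetime import date, timedelta

# Annual review interval from the recommended cadence (adjust as needed).
REVIEW_INTERVAL = timedelta(days=365)

def overdue_reviews(inventory, today=None):
    """Return names of systems whose last risk review is over a year old.

    `inventory` is a list of (name, last_review_date) pairs; systems
    with no recorded review are always flagged.
    """
    today = today or date.today()
    flagged = []
    for name, last_review in inventory:
        if last_review is None or today - last_review > REVIEW_INTERVAL:
            flagged.append(name)
    return flagged
```

Run this as a scheduled job and mail the flagged list to system owners, and the inventory stops being a historical artefact.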
What You Use the Inventory For
An AI inventory is not an end in itself. It is the foundation for:
- Risk assessment: Prioritising which systems require full compliance effort
- FRIA: You cannot conduct a Fundamental Rights Impact Assessment without knowing the system
- Vendor dialogue: The inventory gives you concrete conversations to have with vendors
- Training planning: Knowing who uses what helps you prioritise AI literacy training
- Audit documentation: Supervisory authorities expect to see an up-to-date register
Choosing a Storage Format
Start with the simplest format that works. For many midmarket organisations, this is a structured spreadsheet shared with relevant stakeholders. Advantages: low barrier, everyone knows the format. Disadvantages: difficult to maintain over time, no workflows, no automatic notifications.
Consider migrating to a dedicated AI governance tool when the inventory approaches 15–20 systems. The advantages of a dedicated tool:
- Risk assessment is integrated
- Document generation (technical documentation, FRIA) is supported directly
- Compliance status is visible across systems
- Version history for changes
Prioritisation: Which Systems Are Most Important to Map First?
Not all AI systems are equally important from a compliance perspective. Use this prioritisation order:
Priority 1 — Systems with direct individual decision effects
Recruitment tools, credit assessment systems, performance evaluation tools. These are likely high-risk under Annex III and require full compliance effort.

Priority 2 — Systems processing large volumes of personal data
CRM with AI, HR systems with AI analysis, customer data profiles with AI-driven segmentation. These require coordination with the GDPR compliance process.

Priority 3 — Systems with critical business significance
Production management, supply chain optimisation, financial forecasting models. Not necessarily high-risk, but important to have documented.

Priority 4 — Generative AI productivity tools
Copilot, ChatGPT Enterprise, Gemini. Typically not high-risk, but they should be registered and covered by acceptable-use rules.
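The four tiers reduce to three yes/no questions per system. A sketch — the attribute names (`decides_about_individuals`, `processes_personal_data`, `business_critical`) are illustrative shorthand, not AI Act terminology:

```python
def priority(system):
    """Map a system to a priority tier (1 = map first, 4 = map last)."""
    if system.get("decides_about_individuals"):
        return 1  # likely Annex III high-risk
    if system.get("processes_personal_data"):
        return 2  # coordinate with GDPR work
    if system.get("business_critical"):
        return 3  # document, even if not high-risk
    return 4      # e.g. generative AI productivity tools

systems = [
    {"name": "Copilot"},
    {"name": "ATS screening", "decides_about_individuals": True},
    {"name": "CRM scoring", "processes_personal_data": True},
]
ordered = sorted(systems, key=priority)
```

Sorting the inventory by this key gives you a defensible working order for the compliance effort; a system can of course match several attributes, in which case the highest-priority one wins.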
When Is Inventory "Good Enough"?
A frequently asked question: "When is our AI inventory complete?" The answer is pragmatic: it is complete enough when it covers the systems with the greatest risk and business significance, and when there is a maintenance process keeping it up to date.
Perfect is the enemy of good. An inventory with 80% coverage that is up to date is worth more than one with 100% coverage that is six months out of date.
Three signs of a "good enough" inventory:
- All high-risk AI systems are identified and have complete records
- All systems with direct individual decision effects are included (recruitment, credit, health)
- There is a maintenance process that automatically triggers when new systems are procured
First Steps — Tomorrow
Start with this:
- Send an email to all IT system owners with the question: "Do the systems you own include AI features?"
- Check your Microsoft 365 / Google Workspace admin settings for enabled AI features
- Review the five most recent vendor agreements for AI clauses or AI features
- Write down the first ten systems — that is enough to validate your inventory format
- Download the AI Act checklist and use it to prioritise next steps
An AI inventory with ten systems completed in two days is worth more than a perfect inventory planned for next quarter. Start today.
What you do not know you have, you cannot govern. Inventory is not bureaucracy — it is the foundation for all decision-making about AI in your organisation.