The EU AI Act has entered into force, and from 2 August 2026 the bulk of its obligations, including those for high-risk systems, apply to providers and deployers of AI systems in Europe. For midmarket organisations with 200–5,000 employees, the challenge is not a lack of information — it is the lack of a concrete plan that fits your size and resources.
This guide is not a legal document. It is a pragmatic roadmap written for the IT manager, EA lead, or compliance coordinator who needs to translate the AI Act into action without a dedicated compliance team.
What the EU AI Act Actually Requires of You
The AI Act introduces risk-based regulation. This means the requirements depend on what kind of AI you deploy and what role you play: are you a provider (building AI systems) or a deployer (using others' AI systems)?
Most midmarket organisations are deployers. You deploy systems like Microsoft Copilot, AI features from your ERP vendor, or AI-driven recruitment tools. As a deployer, you have fewer obligations than a provider — but you are not exempt.
Deployer obligations for high-risk AI include:
- AI literacy programme: All employees working with AI systems must receive relevant training (Article 4). Note that this obligation applies to all AI systems, not only high-risk ones, and has applied since 2 February 2025.
- Human oversight: High-risk AI systems must have a defined oversight regime, including clear rules for when a human can intervene or stop the system.
- Usage description and technical documentation: You must document what the system does, who uses it, and under what conditions.
- Logging and traceability: Keep the system's automatically generated logs; under Article 26, deployers retain logs under their control for a period appropriate to the system's purpose, and at least six months.
- Transparency toward affected individuals: People who interact with an AI system, or are subject to decisions it supports, must be informed.
For minimal-risk systems (spam filters, recommendation algorithms without individual profiling), the requirements are far lighter; transparency measures here are voluntary recommendations.
The 20 Things You Need to Have in Order — A Prioritised Overview
The AI Act's requirements can be mapped to six categories. Here is a prioritised sequence for midmarket organisations:
A. Organisation
A1 — AI responsibility: Either appoint an AI Officer or attach the responsibility to an existing role (CTO, IT manager). Document who owns AI compliance.
A2 — Governance structure: Describe the decision-making process for onboarding new AI systems. Who approves? Who assesses risk? A simple decision matrix is enough to start; see the sketch after this list.
A3 — AI policy: A written policy with purpose, risk framework, acceptable and unacceptable uses, and responsibility allocation. It does not need to be 30 pages.
A4 — Incident management: What do you do if an AI system fails, produces biased output, or is misused? A simple incident log and escalation procedure is the minimum.
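To make A2 concrete: a decision matrix can be as small as a table of who assesses and who approves at each risk level. The sketch below expresses one as data; the roles and routing are illustrative assumptions, not requirements from the Act.

```python
# Minimal AI onboarding decision matrix (A2). Roles and routing are
# illustrative; adapt them to your own organisation.
DECISION_MATRIX = {
    "minimal": {"assessed_by": "system owner", "approved_by": "IT manager"},
    "limited": {"assessed_by": "IT manager", "approved_by": "IT manager"},
    "high": {"assessed_by": "AI officer", "approved_by": "CTO"},
    "unacceptable": {"assessed_by": "AI officer", "approved_by": None},
}

def onboarding_route(risk_level: str) -> str:
    """Return the onboarding route for a proposed AI system."""
    route = DECISION_MATRIX[risk_level]
    if route["approved_by"] is None:
        return "Prohibited practice: do not onboard."
    return f"Assessed by {route['assessed_by']}, approved by {route['approved_by']}."

print(onboarding_route("high"))  # Assessed by AI officer, approved by CTO.
```

The point of encoding the matrix as data is that the same table can drive both your documentation and any future tooling without being rewritten.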
B. Mapping
B1 — AI inventory: A register of all AI systems in operation. Include name, vendor, purpose, user group, and preliminary risk assessment. Shadow AI (e.g. unsanctioned ChatGPT use) must be included. A minimal register sketch follows this list.
B2 — Risk classification: Classify each system as unacceptable / high-risk / limited risk / minimal risk. Use Annex III as reference for high-risk categories.
B3 — Usage description: For each high-risk system, write a brief description of the intended use, user group, and potential effect on affected individuals.
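If you want the register in something slightly more structured than a spreadsheet from day one, a minimal sketch could look like this. The fields mirror B1 and B2; the listed systems and the vendor name ExampleVendor are hypothetical.

```python
# AI inventory (B1) with risk classification (B2). Field names mirror the
# checklist above; the systems listed and "ExampleVendor" are hypothetical.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    user_group: str
    risk: RiskLevel
    annex_iii_category: str | None = None  # e.g. "employment" for recruitment tools
    shadow_ai: bool = False                # unsanctioned tools belong here too

inventory = [
    AISystem("Copilot", "Microsoft", "Drafting and summarisation",
             "All office staff", RiskLevel.MINIMAL),
    AISystem("CV screening", "ExampleVendor", "Shortlisting applicants",
             "HR", RiskLevel.HIGH, annex_iii_category="employment"),
]

print([s.name for s in inventory if s.risk is RiskLevel.HIGH])  # ['CV screening']
```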
C. Training
C1 — AI literacy baseline: Assess current competence levels. Identify who needs training on what.
C2 — Training plan: Create an annual AI literacy training plan segmented by roles (decision-makers, users, IT).
C3 — Training documentation: Record completed training with date, participant, and content. A simple spreadsheet will do initially.
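For C3, a plain CSV file genuinely is enough to begin with. A minimal sketch, with column names of our own choosing rather than anything prescribed by the Act:

```python
# Training log (C3) as a CSV file with one row per completed session.
# Column names are a suggestion, not a requirement from the Act.
import csv
import os
from datetime import date

FIELDS = ["date", "participant", "role", "content", "duration_minutes"]

def record_training(path: str, rows: list[dict]) -> None:
    """Append completed training sessions, writing a header for a new file."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerows(rows)

record_training("ai_literacy_log.csv", [{
    "date": date.today().isoformat(),
    "participant": "jane.doe@example.com",
    "role": "HR user",
    "content": "Intro to AI Act deployer obligations",
    "duration_minutes": 45,
}])
```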
D. Risk assessment
D1 — FRIA for high-risk: a Fundamental Rights Impact Assessment, a five-step assessment of effects on fundamental rights. Under Article 27 it is strictly mandatory only for certain deployers (public bodies and some private deployers, e.g. in credit scoring and insurance), but it is sound practice for every high-risk system.
D2 — DPIA coordination: Coordinate with your data protection officer on overlap between DPIA and FRIA for systems processing personal data.
D3 — Technical documentation: For high-risk systems, build documentation per Article 11: system architecture, training data description, performance metrics, known limitations.
D4 — Automatic logging: Identify and implement logging requirements per system per Articles 12 and 26.
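Articles 12 and 26 do not prescribe a log format. As an illustration of the kind of per-decision record that supports traceability, here is a minimal sketch; the field names are our assumption, not the Act's.

```python
# Per-decision audit record for a high-risk system (D4). Articles 12 and 26
# do not prescribe a format; these fields illustrate what supports
# traceability in practice.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(system: str, input_ref: str, output_summary: str,
                    human_reviewer: str | None) -> None:
    """Write one structured, timestamped record per AI-assisted decision."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_ref": input_ref,            # a reference, not raw personal data
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,  # None flags missing oversight
    }))

log_ai_decision("CV screening", "application-2026-0042", "shortlisted",
                human_reviewer="hr.lead@example.com")
```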
E. Vendors
E1 — Contract review: Review existing AI contracts. Ensure the vendor documents the system's Annex III status and provides relevant technical documentation.
E2 — AI due diligence: Introduce a due diligence checklist for new AI vendors; a starting point follows this list.
E3 — Ongoing monitoring: Set up a process for receiving and responding to vendor updates on compliance status, system changes, and incidents.
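The due diligence checklist in E2 does not need tooling either. A starting set of questions, illustrative rather than exhaustive (the vendor name in the usage line is hypothetical):

```python
# Starting questions for AI vendor due diligence (E2). Illustrative, not
# exhaustive; extend with sector-specific questions as needed.
DUE_DILIGENCE_CHECKLIST = [
    "Does the vendor state the system's intended purpose in writing?",
    "Has the vendor assessed the system against the Annex III categories?",
    "Is technical documentation available on request (Article 11)?",
    "Are automatically generated logs accessible to you as deployer?",
    "How does the vendor notify deployers of substantial modifications?",
    "Where is personal data processed, and under which GDPR basis?",
]

def print_checklist(vendor: str) -> None:
    """Print a fillable checklist for one prospective vendor."""
    print(f"Due diligence: {vendor}")
    for i, question in enumerate(DUE_DILIGENCE_CHECKLIST, 1):
        print(f"  {i}. [ ] {question}")

print_checklist("ExampleVendor")
```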
F. GDPR and transparency
F1 — AI-generated output labelling: For systems producing output for customers or users, introduce visible marking of AI-generated content; see the sketch after this list.
F2 — Human intervention: Document how human oversight is operationalised for each high-risk system.
F3 — GDPR coordination: Update privacy policies and RoPA (Register of Processing Activities) to reflect AI systems.
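For F1, labelling can start as one consistent notice appended to outbound content. A minimal sketch; the wording is an example, not text prescribed by the Act.

```python
# Minimal labelling of AI-generated output (F1). Wording and placement are
# your choice; what matters is that the notice is visible and consistent.
AI_NOTICE = "This content was generated with the assistance of an AI system."

def label_output(text: str) -> str:
    """Append a visible AI-generation notice to outbound content."""
    return f"{text}\n\n---\n{AI_NOTICE}"

print(label_output("Dear customer, your request has been processed."))
```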
Prioritisation: Where Do You Start?
With 102 days to the deadline, sequence matters. A realistic prioritisation for a midmarket organisation starting from zero:
Weeks 1–2: Map and classify
Start with the AI inventory. You cannot prioritise what you do not know exists. Use three questions per system: What does it do? Who is affected? What are the consequences of failure?
Weeks 3–4: Risk-assess high-risk systems
Use the Annex III list and run a FRIA for systems that fall within the high-risk categories. Focus on systems with direct human impact: recruitment, credit, health, education.
Weeks 5–8: Build the governance foundation
Appoint an owner for AI compliance, write an AI policy, create an incident procedure. This is policy work that does not require technical resources.
Weeks 9–12: Documentation and vendor dialogue
Obtain documentation from vendors of high-risk systems. Start building technical documentation for in-house AI systems.
Ongoing: Training
AI literacy is not a one-time exercise. Start at the leadership level and work down to the user level.
What is Realistic for a Midmarket Organisation?
Be honest with yourself: you do not have the resources to run AI Act compliance the way a large enterprise does. And you do not need to. Regulators look at proportionality. What is required of a 300-person manufacturing company is not the same as for a 3,000-person financial group.
The realistic goal is:
- An AI inventory that is complete enough: Not perfect, but covering the systems that actually have impact.
- Governance that is simple enough to maintain: A simple policy you actually use is better than an advanced policy nobody follows.
- Documentation that matches the risk: Focus your energy on high-risk systems. Minimal-risk systems require minimal effort.
- Processes that can be repeated: Incident management, due diligence, training — it must be simple enough that it actually happens.
Tools and Infrastructure
An AI inventory can start in a spreadsheet. But with ten or twenty systems, the structure begins to break down. Consider early whether you will use a dedicated AI governance tool that can:
- Keep the AI inventory structured and up to date
- Attach risk classifications per system
- Generate compliance documentation (technical documentation, FRIA, AI policy)
- Give an overview of governance status across systems
Such a tool does not need to come at an enterprise price. What matters is that it is designed for governance work, not just a spreadsheet with a fancy interface.
Sanctions and Oversight: What Happens If You Do Not Comply?
Violations of the AI Act can result in fines up to €35 million or 7% of global annual turnover — whichever is higher. That ceiling applies to breaches of the prohibitions on certain AI practices, by providers and deployers alike. For breaches of the high-risk obligations, the level is €15 million or 3%.
Oversight is handled by national competent authorities. In Denmark, the Data Protection Agency (Datatilsynet) is expected to play a central role, particularly for AI systems that process personal data. A dedicated Danish AI supervisory authority has not yet been established, but that work is under way.
The important point is not the size of the fines — it is that oversight will follow a risk-based approach. Authorities will prioritise organisations that openly ignore the law and systems with the greatest potential harm. Midmarket organisations that can document a good-faith effort — an AI inventory, an AI policy, a FRIA for high-risk systems — will be in a far stronger position than those who have not engaged with the issue.
What "Proportionality" Means in Practice
The AI Act has a proportionality principle that recognises SMEs and midmarket organisations cannot meet the same administrative requirements as global tech companies. Concretely this means:
- Simplified technical documentation formats for SMEs, and a lighter documentation burden for deployers than for providers
- Risk assessments that match the system's actual risk — an FAQ chatbot does not require the same documentation as a recruitment algorithm
- Flexibility in implementation of AI literacy programmes, as long as the outcome is documented
Proportionality is not a free pass. Your compliance must match your actual AI profile. If you have no high-risk systems, the requirements are minimal. If you have three high-risk systems, all three require full treatment.
What the AI Act Actually Changes in Your Day-to-Day
The most underappreciated consequence of the AI Act is not the documentation requirements — it is the vendor dialogue. From 2026, your AI vendors will ask you: "Have you documented that you are using the system within its intended use?" And you will ask the same of new vendors: "Can you document that this system is not high-risk, or provide the required technical documentation?"
The AI Act permanently changes how AI systems are procured. That is a healthy change — but it requires you to be ready.
Next Steps
Start here:
- Complete the AI inventory template (download our checklist as PDF)
- Classify each system using Annex III as a guide
- Appoint an owner for AI compliance — even a part-time role is better than none
- Write a simple AI policy — one A4 sheet with purpose, framework, and responsibility
- Start the vendor dialogue — contact your top-3 AI vendors and request AI Act documentation
The AI Act is not designed to stop AI innovation. It is designed to make AI deployment trustworthy. That is actually a reasonable goal — and one that benefits you, not just the regulators.
Governance is about knowing what you deploy, who it affects, and what you do when something goes wrong. That is good IT management, regardless of whether the law requires it.
Spekir builds the layer that connects strategy to the IT portfolio. See Atlas →
Related articles
Annex III Explained — When Is Your AI 'High-Risk'?
The eight Annex III categories explained with concrete examples from Nordic midmarket. When is your recruitment tool, credit scoring, or OT system high-risk under the EU AI Act?
Your AI Policy — 8 Sections You Cannot Skip
What must an AI policy contain? The eight mandatory sections, common mistakes, and what separates a policy that is actually used from one that lives in a PDF folder nobody opens.
DPIA and FRIA — Two Documents, Two Purposes
The difference and overlap between GDPR's DPIA and the AI Act's FRIA. When do you need which, who is responsible, and how do you avoid duplication with a coordinated workflow?