Annex III is the appendix to the EU AI Act that defines which AI applications are considered "high-risk" and therefore subject to the strictest requirements. Understanding Annex III is not a legal task — it is an IT management task. You need to be able to determine whether your systems fall within it.
This article walks through the eight categories, provides concrete examples from industries relevant to the Nordic midmarket, and describes what "high-risk" actually means in practice.
What Is Annex III?
Annex III is a list of eight categories of AI systems that are automatically classified as high-risk. It is not an exhaustive list of all risky AI systems — it is the regulation's answer to "which applications are important enough to require documentation, oversight, and assessment?"
The categorisation is based on three criteria: the sector the system operates in, the system's function, and the potential harm to individuals' fundamental rights.
Important clarification: The high-risk classification applies to AI systems used as components in regulated products, or standalone AI systems that directly affect access to resources, rights, or services.
The Eight Categories
1. Biometric identification and categorisation
AI systems for remote biometric identification of persons, and systems for categorising individuals based on biometric data (face, gait, voice). Note that real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is largely prohibited outright under Article 5; it is the remaining remote identification and categorisation uses that fall under Annex III as high-risk.
Relevant for midmarket? Access control systems with facial recognition and biometric time registration (clocking in and out). Many HR systems in Europe are already moving away from these technologies in response to the AI Act.
What it requires: Full high-risk compliance including FRIA, technical documentation, and human oversight.
2. Critical infrastructure
AI systems for managing and operating critical infrastructure — energy, water, transport, financial infrastructure.
Relevant for midmarket? Energy companies, water utilities, manufacturing companies with OT infrastructure. AI for predicting machine failures or optimising energy consumption can fall here, depending on context.
What it requires: Special attention to robustness and cybersecurity, as failures can have societal consequences.
3. Education and vocational training
AI systems used to determine access to education, assess student performance, or personalise education in ways that can affect career paths.
Relevant for midmarket? E-learning platforms with AI-driven assessment, onboarding systems that evaluate skill levels, and HR systems that screen candidates for internal training programmes.
What it requires: Clear documentation of assessment criteria, possibility of human review of AI decisions, and transparency toward those affected.
4. Employment and HR
AI systems for recruitment, application screening, performance assessment, promotion, or dismissal.
Relevant for midmarket? This is likely the category affecting most Nordic midmarket organisations. CV screening tools, ATS systems with AI ranking, or AI-driven performance reviews all fall here.
Example: An AI system that ranks job applicants based on CV analysis is high-risk. It directly affects individuals' access to employment.
What it requires: FRIA, technical documentation from the vendor, human oversight for all final decisions, and the ability for candidates to receive an explanation.
5. Access to and enjoyment of essential private and public services
AI systems used to assess creditworthiness, determine eligibility for public benefits, healthcare, housing, or insurance.
Relevant for midmarket? Financial institutions with AI-driven credit scoring systems, insurance companies with AI risk assessment, and healthcare companies with AI triage systems.
What it requires: Full documentation and transparency. Individuals have the right to understand the basis for decisions affecting them.
6. Law enforcement
AI systems used by law enforcement for individual risk assessment, crime analysis, and profiling.
Relevant for midmarket? Very low — primarily relevant for public authorities. Exception: security companies providing analytical services to police.
7. Migration and asylum management
AI systems used for border surveillance, risk assessment in asylum processes, or identity verification of migrants.
Relevant for midmarket? Low. Primarily relevant for public authorities and dedicated vendors for migration management.
8. Administration of justice and democratic processes
AI systems that assist courts with fact-finding, interpretation of legislation, or assessment of case outcomes.
Relevant for midmarket? Low. Relevant for legal tech companies and systems supporting the justice system.
What Is NOT High-Risk
It is equally important to understand what is not high-risk. The AI Act defines a "minimal risk" category with no formal obligations beyond voluntary codes of conduct:
- Spam filters
- Recommendation algorithms (Netflix-like) that do not affect access to resources
- Content moderation systems that do not discriminate
- Pricing algorithms for mass market products
- Productivity assistants (generative AI for text suggestions, coding, etc.)
An important point: General-purpose AI models like GPT or Claude are not in themselves high-risk systems, although their providers carry separate obligations under the Act's chapter on general-purpose AI. It is the specific deployment and application that determines the classification.
Self-Check: Is Your System High-Risk?
Use these three questions as a first filter:
1. Sector check: Does the system operate within one of the eight categories (biometrics, critical infrastructure, education, HR, essential services, law enforcement, migration, justice)?
2. Impact check: Can the system's output directly affect an individual's access to services, resources, employment, or public benefits?
3. Decision check: Do you use the system's output to make or inform decisions about specific individuals?
Three "yes" answers: Your system is likely high-risk and requires full compliance effort.
One or two "yes" answers: Review the category carefully with your legal advisor.
All "no": The system is likely not high-risk, but document the reasoning.
What High-Risk Classification Requires in Practice
For deployers (most midmarket organisations), high-risk classification requires:
- Technical documentation from the vendor: You are entitled to receive and retain documentation about the system's architecture, training data, and known limitations.
- FRIA (Fundamental Rights Impact Assessment): A five-step assessment of the system's potential impact on fundamental rights.
- Human oversight protocol: A written procedure for when and how an employee can intervene or override the system.
- Logging: Relevant system activity must be logged sufficiently to enable audit.
- Transparency toward users: Individuals affected by the system's decisions must be informed.
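The logging obligation above can be met in many ways; a minimal sketch is an append-only record per AI-assisted decision. The field names here are illustrative suggestions, not mandated by the AI Act, and the JSON-lines format is simply one auditable choice.

```python
import json
import datetime

def log_ai_decision(log_path, system_id, input_ref, output_summary,
                    human_reviewer=None, overridden=False):
    """Append one auditable record per AI-assisted decision (JSON lines).

    Field names are illustrative, not prescribed by the AI Act.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,            # entry in your AI inventory
        "input_ref": input_ref,            # a reference, not raw personal data
        "output_summary": output_summary,  # what the system recommended
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
        "overridden": overridden,          # was the AI output overruled?
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a human reviewer confirms an ATS ranking
log_ai_decision("ai_audit.jsonl", "ats-cv-ranking", "candidate-2931",
                "ranked 4/120", human_reviewer="hr.lead", overridden=False)
```

Storing a reference to the input rather than the input itself keeps the audit log useful without creating a second store of personal data to manage under GDPR.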
Provider Responsibility vs. Deployer Responsibility
It is important to distinguish between what the vendor is responsible for and what you as a deployer are responsible for.
Vendor responsibility (provider under the AI Act):
- Technical documentation of the system (Article 11)
- CE marking and conformity assessment for high-risk systems
- Registration in the EU database of high-risk AI systems
- Updating and maintaining compliance documentation
Your responsibility as a deployer:
- Receiving and retaining the vendor's technical documentation
- Completing the FRIA before deployment
- Implementing a human oversight protocol
- Informing affected individuals
- Ensuring the system is only used within its intended purpose
This means your due diligence process for AI procurement should include a requirement for the vendor to document the system's Annex III status, and that you receive the technical documentation you are entitled to.
What Applies to Systems You Build Internally?
Many midmarket organisations build or fine-tune AI models internally — typically using platforms like Azure OpenAI, AWS Bedrock, or Google Vertex AI. What applies to these systems?
If you build an AI system that falls under an Annex III category, you are a provider under the AI Act — not merely a deployer. This entails far more extensive requirements: conformity assessment, CE marking, registration in the EU database, and full technical documentation including training data description.
For most midmarket organisations using pre-trained models and adapting them via prompt engineering or retrieval-augmented generation, the classification is less clear. Generally: adaptation via prompting does not in itself change the classification. Fine-tuning on your own proprietary data for a specific high-risk application may make you a provider — consult legal counsel if in doubt.
When Annex III Is Updated
The AI Act gives the Commission authority to update Annex III via delegated acts. The list of high-risk systems can therefore expand over time. The EU AI Office continuously monitors technology developments and market practices.
For you as an organisation: AI inventory and risk classification are not one-time tasks. They are ongoing processes. Schedule a semi-annual review from 2026.
Practical Conclusion for Midmarket
Most Nordic midmarket organisations will have one to three high-risk systems in operation — typically in HR (recruitment), possibly in financial risk assessment, and potentially in OT management in manufacturing companies.
Start with HR and recruitment tools. This is the category that affects the broadest range of midmarket organisations and is easiest to identify since most ATS vendors will have public statements on AI Act compliance status.
Contact your vendors and request: (1) a statement of the system's Annex III status, (2) technical documentation you are entitled to as a deployer, and (3) a description of the human oversight regime.
Document the reasoning for systems you classify as "not high-risk". Supervisory authorities may ask — and "we didn't know" does not reduce liability. A short written rationale stored alongside your AI inventory entry for each system is sufficient.
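A "short written rationale" can be as simple as a structured entry stored next to the system in your AI inventory. The fields below are this article's suggestion for what such an entry might capture, not a format prescribed by the Act; the system and assessor names are invented for the example.

```python
import json

# Illustrative "not high-risk" rationale entry for an AI inventory
rationale = {
    "system": "Internal helpdesk chatbot",
    "classification": "not high-risk",
    "assessed": "2026-01-15",
    "assessor": "it.manager@example.com",
    "reasoning": [
        "No Annex III sector: internal IT support only",
        "Output does not affect access to services, employment, or benefits",
        "No decisions about specific individuals are based on its output",
    ],
    "review_due": "2026-07-15",  # next semi-annual review
}

print(json.dumps(rationale, indent=2))
```

Each entry answers the three self-check questions in writing and names a review date, which is precisely the evidence that distinguishes "we assessed this" from "we didn't know".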
Annex III is not designed to ban these systems. It is designed to make them accountable. That requires paperwork — but the paperwork may be the most important thing you do to protect yourself and your employees.
Related articles
EU AI Act for Midmarket — What You Actually Need to Do
A pragmatic roadmap for the IT manager or compliance coordinator who needs to translate the EU AI Act into action without a dedicated compliance team. The 20 things, prioritisation, and what is realistic.
Your AI Policy — 8 Sections You Cannot Skip
What must an AI policy contain? The eight mandatory sections, common mistakes, and what separates a policy that is actually used from one that lives in a PDF folder nobody opens.
DPIA and FRIA — Two Documents, Two Purposes
The difference and overlap between GDPR's DPIA and the AI Act's FRIA. When do you need which, who is responsible, and how do you avoid duplication with a coordinated workflow?