
From 12 AI Experiments to One Direction

Founder, Spekir · Apr 16, 2026 · 7 min read
AI Strategy · Operating Model · Prioritisation

Every organisation I talk to has the same story. They started experimenting with AI twelve to eighteen months ago. A few teams built prototypes. Someone in marketing uses ChatGPT daily. Finance explored document extraction. IT ran a proof of concept for ticket classification. Product has two models in production that nobody outside the team knows about.

The experiments are real. The results are often promising. But the direction is missing.

The Experiment Trap

Experimentation is a good thing. It signals that the organisation is curious, that people are trying to solve real problems, and that there is energy around AI. The problem is not the experiments themselves — it is the assumption that experiments will naturally converge into a strategy.

They will not. Without deliberate prioritisation, you end up with a portfolio of disconnected pilots that each serve a local need but collectively tell you nothing about where the organisation should invest.

This is not a failure of the people running the experiments. It is a structural gap. Nobody owns the question "which of these experiments should become capabilities?" because in most midmarket organisations, nobody has that mandate.

What Direction Actually Means

Direction is not a strategy document. It is not a 50-slide deck about the AI opportunity. It is the answer to three specific questions.

First: where are we today? What is our actual AI maturity — not aspirational, but real — across data readiness, process maturity, and in-house competencies? This is not a score on a consulting framework. It is an honest assessment of what you can actually sustain.

Second: which use cases should we pursue, and in what order? Not every promising experiment deserves investment. Prioritisation means comparing use cases by two axes — impact on the business and feasibility given current constraints — and being willing to park good ideas that are not the right ones for now.

Third: how will we operate AI as a capability? Who decides what gets built? Who owns the models in production? What happens when something breaks? This is not governance in the compliance sense — it is an operating model for AI as a permanent part of how you work.

The output should be one page per question. Three pages total. Anything longer is a sign that you are hedging.

Why Most AI Strategies Fail

The failure mode is almost always the same. Someone — usually the CTO or a consulting partner — produces a comprehensive AI strategy that covers everything from data infrastructure to ethics to talent development. It is thorough, well-researched, and impossible to execute.

The problem is scope. A strategy that covers everything prioritises nothing. When every initiative is "important", the organisation defaults to what is easiest rather than what matters most.

The other failure mode is the opposite: no strategy at all, just a mandate to "do more AI." Teams spin up projects without coordination. Budget gets allocated based on who asks loudest. Six months later, you have more experiments but no more clarity.

The pattern that works is narrow focus. Pick the three use cases with the highest ratio of business impact to implementation complexity. Build one of them to production quality. Use that experience to inform the next decision. Strategy is not a plan — it is a sequence of bets, deliberately ordered.
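To make that ranking concrete, here is a minimal sketch in Python. The use cases echo the ones above, but the scores are invented for illustration; the real work is agreeing on the numbers, not computing the ratio.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int       # estimated business impact, scored 1-5
    complexity: int   # implementation complexity, scored 1-5

    @property
    def ratio(self) -> float:
        # Highest impact per unit of complexity ranks first.
        return self.impact / self.complexity

# Illustrative scores only -- agreeing on these numbers is the hard part.
portfolio = [
    UseCase("Ticket classification", impact=4, complexity=2),
    UseCase("Document extraction", impact=3, complexity=3),
    UseCase("Marketing copy drafting", impact=2, complexity=1),
    UseCase("Demand forecasting", impact=5, complexity=5),
]

# Pick the three use cases with the highest impact-to-complexity ratio.
for uc in sorted(portfolio, key=lambda u: u.ratio, reverse=True)[:3]:
    print(f"{uc.name}: {uc.impact}/{uc.complexity} = {uc.ratio:.1f}")
```

The arithmetic is trivial by design. The value is the forcing function: every candidate gets a written impact score and a written complexity score, and the comparison happens in the open rather than in whoever's head argues loudest.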

The Operating Model Question

This is the part most organisations skip, and it is the part that determines whether AI becomes a capability or stays a collection of experiments.

An operating model for AI answers practical questions. When a department wants to deploy a new AI tool, what is the process? When a model in production degrades, who notices and who fixes it? When a new regulation applies, who assesses the impact?

For a midmarket organisation, the operating model does not need to be elaborate. You do not need a centre of excellence or a dedicated ML ops team. You need clarity on three things: who owns the decision to deploy, who owns the system in production, and who owns the risk.

In most cases, the answers map to existing roles. The IT leader owns deployment decisions above a certain risk threshold. The team that built it owns the system. Risk and compliance reviews existing AI systems on the same cadence as other technology.

The operating model is not about creating new bureaucracy. It is about making explicit what is currently implicit — so that the next twelve experiments do not create the same coordination gap as the first twelve.

From Experiments to Capability

The difference between an organisation that experiments with AI and one that has AI as a capability is not technical sophistication. It is prioritisation and operating discipline.

Prioritisation means being willing to say no to eight promising experiments so you can properly invest in four. Operating discipline means someone owns the portfolio, reviews progress quarterly, and has the authority to redirect resources when a bet is not paying off.

None of this requires large teams or expensive infrastructure. It requires someone spending two weeks mapping what exists, assessing what matters, and writing down how decisions will be made going forward.

The output is not a strategy document that sits in a shared drive. It is a working artifact — a prioritised backlog, an operating model, and a decision log — that changes as you learn.
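As an illustration of what "working artifact" can mean in practice, a decision log can be as lightweight as a dated list of bets, owners, and review dates. The fields below are one possible shape, not a prescribed schema, and the entries are invented.

```python
from datetime import date

# One possible shape for a decision log entry. Field names are
# illustrative; the point is that each bet, its owner, and its
# review date are written down in one place and revisited.
decision_log = [
    {
        "date": date(2026, 4, 1),
        "decision": "Build ticket classification to production quality",
        "owner": "IT leader",
        "review_by": date(2026, 7, 1),   # quarterly review cadence
    },
    {
        "date": date(2026, 4, 1),
        "decision": "Park demand forecasting until data readiness improves",
        "owner": "Head of Product",
        "review_by": date(2026, 10, 1),
    },
]
```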

Start with clarity. The capability follows.


Spekir helps organisations move from scattered AI experiments to a clear, prioritised direction with concrete next steps.