
TIME Analysis Done Right

Founder, Spekir·Apr 16, 2026·7 min read
TIME · APM · Strategy · Portfolio Management

The TIME model — Tolerate, Invest, Migrate, Eliminate — is the most widely used framework for classifying applications in a portfolio. It is simple, memorable, and maps directly to action. Every quadrant tells you what to do next.

And yet, most TIME implementations are essentially colour-coded spreadsheets. Applications get a label, someone makes a chart, and the portfolio review moves on. The classification exists but the decisions it should drive do not follow.

This is not a problem with TIME. It is a problem with how TIME is typically practiced.

The Scoring Problem

The foundation of TIME is two dimensions: business fitness and technical fitness. Business fitness measures how well an application supports the organisation's strategic needs. Technical fitness measures the quality, security, and maintainability of the technology.

The combination of these two scores places an application in one of four quadrants. High business fit and high technical fit means Invest. Low on both means Eliminate. High business but low technical means Migrate. Low business but high technical means Tolerate.
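The quadrant mapping above can be sketched as a small function. This is a minimal illustration, not part of the model itself; the 3.0 cut-off on a 1–5 scale is an assumption, and you should pick a threshold that suits your own scoring distribution.

```python
def time_quadrant(business_fit: float, technical_fit: float,
                  threshold: float = 3.0) -> str:
    """Map business and technical fitness (1-5 scale) to a TIME quadrant.

    The 3.0 threshold is illustrative, not prescribed by the model.
    """
    high_biz = business_fit >= threshold
    high_tech = technical_fit >= threshold
    if high_biz and high_tech:
        return "Invest"
    if high_biz:
        return "Migrate"   # valuable to the business, technically weak
    if high_tech:
        return "Tolerate"  # sound technology, low business value
    return "Eliminate"

print(time_quadrant(4.2, 4.5))  # Invest
print(time_quadrant(4.0, 2.1))  # Migrate
```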

The problem begins with how scores are assigned. In most organisations, business fitness and technical fitness are estimated by one person — usually the IT leader or an architect — based on their general impression of the application. There is no structured scoring model, no defined criteria, and no mechanism for incorporating input from the people who actually use or maintain the system.

The result is a classification that reflects one person's mental model rather than structured evidence. That mental model may be largely correct, but it is not defensible, not repeatable, and not transferable to someone who does not share the same context.

What Good Scoring Looks Like

A structured TIME assessment should break each dimension into explicit sub-criteria.

For business fitness, the relevant questions are strategic alignment (does this application support a stated business objective?), business value (what would happen if we switched it off tomorrow?), and user satisfaction (do the people who use it daily consider it adequate?). Each of these can be scored on a simple scale — say 1 to 5 — and averaged.

For technical fitness, the questions are technical quality (how modern and maintainable is the stack?), security posture (are there known vulnerabilities, and is the platform actively maintained?), and operational health (reliability, performance, integration complexity). Same scale, same averaging.

This is not a complex scoring model. It is six questions per application, each scored on a five-point scale. A portfolio of 100 applications can be assessed in a single focused afternoon — especially if you pre-populate scores based on available data and only adjust where necessary.
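The six-question model can be expressed in a few lines. The criterion names below mirror the questions in the text but are not a standard vocabulary; treat them as placeholders for your own scoring sheet.

```python
from statistics import mean

# Sub-criteria from the six-question model; the identifiers are
# illustrative labels, not an industry standard.
BUSINESS_CRITERIA = ("strategic_alignment", "business_value", "user_satisfaction")
TECHNICAL_CRITERIA = ("technical_quality", "security_posture", "operational_health")

def fitness_scores(answers: dict[str, int]) -> tuple[float, float]:
    """Average six 1-5 answers into (business_fit, technical_fit)."""
    for name, score in answers.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{name} must be scored 1-5, got {score}")
    business = mean(answers[c] for c in BUSINESS_CRITERIA)
    technical = mean(answers[c] for c in TECHNICAL_CRITERIA)
    return business, technical

biz, tech = fitness_scores({
    "strategic_alignment": 4, "business_value": 5, "user_satisfaction": 3,
    "technical_quality": 2, "security_posture": 3, "operational_health": 2,
})
print(biz, tech)  # business 4.0, technical roughly 2.33 -> a Migrate candidate
```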

The benefit is not precision. No scoring model perfectly captures reality. The benefit is consistency — every application is evaluated against the same criteria, making comparisons meaningful and making changes visible over time.

The Strategy Connection

This is where most TIME implementations fall short. The scores are assigned, the chart is drawn, and the result is a snapshot of the portfolio. But the snapshot is disconnected from the question that actually matters: does our portfolio support our strategy?

TIME becomes genuinely useful when it is connected to strategic context. If the organisation's strategy emphasises customer experience, then applications that directly support customer-facing capabilities should have higher strategic weight. If cost reduction is the priority, then applications in the Tolerate quadrant with high operating costs become urgent migration candidates.

Without this connection, TIME tells you what is healthy and what is not. With the connection, TIME tells you what to do and in what order.

The practical mechanism is capability mapping. When applications are linked to business capabilities, and those capabilities are linked to strategic priorities, the TIME classification becomes a decision engine. You can identify which capabilities are strategically critical but technically at risk. You can calculate the cost of maintaining applications that support non-strategic activities. You can prioritise migration based on strategic impact rather than technical discomfort.
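One way to sketch this prioritisation: weight each application's technical risk by the strategic weight of the capability it supports. The data model, application names, and weights below are all hypothetical; they only illustrate the ranking mechanic.

```python
# Hypothetical portfolio: each app links to one capability, and each
# capability carries a strategic weight (0-1) set by the strategy team.
apps = [
    {"name": "CRM",      "capability": "customer_mgmt",  "technical_fit": 2.0},
    {"name": "Intranet", "capability": "internal_comms", "technical_fit": 2.5},
    {"name": "Billing",  "capability": "revenue_ops",    "technical_fit": 3.8},
]
strategic_weight = {"customer_mgmt": 0.9, "revenue_ops": 0.7, "internal_comms": 0.2}

def migration_priority(app: dict) -> float:
    """Higher score = strategically critical AND technically at risk."""
    risk = 5.0 - app["technical_fit"]  # invert fitness (1-5) into risk
    return strategic_weight[app["capability"]] * risk

ranked = sorted(apps, key=migration_priority, reverse=True)
print([a["name"] for a in ranked])  # ['CRM', 'Billing', 'Intranet']
```

Note that Billing outranks Intranet despite being technically healthier: strategic impact, not technical discomfort, drives the order.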

Tracking Over Time

A single TIME assessment is a snapshot. Snapshots are useful, but they do not tell you whether the portfolio is improving or deteriorating.

Assessment history — tracking how scores change over time — transforms TIME from a periodic exercise into a management tool. When an application's technical fitness drops, that is a signal. When a migration candidate has been in the Migrate quadrant for three consecutive quarters without action, that is a different kind of signal.

Trends also make the governance conversation easier. Instead of presenting a static quadrant chart at the quarterly review, you present a trajectory. "We had 14 applications in the Eliminate quadrant last quarter. We retired 3, and 2 new ones appeared. Here is the plan for the remaining 13." That is a conversation a CIO can engage with.
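The stalled-migration signal described above is easy to compute from assessment history. The quarterly records below are invented for illustration; the point is the shape of the check, not the data.

```python
# Hypothetical per-application history of (quarter, quadrant) records,
# oldest first.
history = {
    "CRM":     [("2025-Q2", "Migrate"), ("2025-Q3", "Migrate"), ("2025-Q4", "Migrate")],
    "Billing": [("2025-Q2", "Invest"),  ("2025-Q3", "Invest"),  ("2025-Q4", "Invest")],
}

def stale_migrations(history: dict, quarters: int = 3) -> list[str]:
    """Apps stuck in the Migrate quadrant for the last `quarters` assessments."""
    return [
        app for app, records in history.items()
        if len(records) >= quarters
        and all(quadrant == "Migrate" for _, quadrant in records[-quarters:])
    ]

print(stale_migrations(history))  # ['CRM'] -- three quarters without action
```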

The Cost Dimension

TIME classifications become significantly more useful when combined with cost data. The total cost of ownership — licensing, hosting, support, and allocated FTE — per application turns the quadrant chart into a financial instrument.

An application in the Tolerate quadrant costs the organisation money without strategic return. Quantifying that cost makes the case for rationalisation concrete. "We spend 1.2 million annually on 18 applications classified as Tolerate" is a statement that gets executive attention.

Similarly, the total investment in the Invest quadrant should correlate with strategic priorities. If your largest technology spend is on applications that support non-strategic capabilities, you have a portfolio alignment problem regardless of technical health.
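Summing TCO per quadrant is the simplest version of this financial view. The applications and figures below are made up; the aggregation is the point.

```python
from collections import defaultdict

# Hypothetical annual TCO per application, in thousands.
portfolio = [
    {"name": "Legacy HR",   "quadrant": "Tolerate",  "tco": 180},
    {"name": "Old Wiki",    "quadrant": "Tolerate",  "tco": 40},
    {"name": "CRM",         "quadrant": "Invest",    "tco": 420},
    {"name": "Fax Gateway", "quadrant": "Eliminate", "tco": 25},
]

def cost_by_quadrant(portfolio: list[dict]) -> dict[str, float]:
    """Total annual cost of ownership per TIME quadrant."""
    totals: dict[str, float] = defaultdict(float)
    for app in portfolio:
        totals[app["quadrant"]] += app["tco"]
    return dict(totals)

print(cost_by_quadrant(portfolio))
# {'Tolerate': 220.0, 'Invest': 420.0, 'Eliminate': 25.0}
```

That single Tolerate total is the number that turns a quadrant chart into an executive conversation.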

Making TIME Work

TIME is not complicated. The model is elegant precisely because it is simple. Making it work requires three things that have nothing to do with the framework itself.

First, structured scoring. Six questions per application, consistently applied. Not perfect, but consistent.

Second, strategic connection. Link applications to capabilities, capabilities to strategy. Without this link, TIME is a health check, not a decision tool.

Third, regularity. Assess quarterly. Track changes. Report trends. A TIME classification that is updated once a year is a historical document, not a management tool.

The organisations that get the most value from TIME are not the ones with the most sophisticated scoring models. They are the ones that do it consistently, connect it to strategy, and use the results to drive actual decisions.


Atlas has TIME analysis built in — structured scoring, strategy alignment, cost impact, and quarterly trending in one view.