AI Strategy · 12 Mar 2026 · 7 min read

Everyone's increasing their AI budget. Almost no one can explain why.

86% of enterprises plan to increase AI spend in 2026, according to NVIDIA's State of AI report. But when you ask what return they expect on that spend, the answers get vague fast.

NVIDIA's 2026 State of AI report dropped a number this week that stopped me mid-read: 86% of surveyed enterprises plan to increase their AI budgets this year. Nearly all the rest plan to hold steady. Virtually no one is cutting.

On one level, this is unsurprising. AI is the dominant technology investment story of this era, the competitive pressure to be seen to be moving is intense, and there's genuine value being unlocked in leading deployments. Of course budgets are going up.

But I've spent the last several months in budget conversations with clients across financial services, professional services, and consumer businesses. When I ask a simple question — "what return do you expect on this investment, and how will you measure it?" — I encounter a consistent and somewhat alarming pattern: the budget number is stated with confidence; the answer to my question is not.

The strategic FOMO problem

A meaningful portion of AI budget growth in 2026 is driven by strategic fear of missing out rather than calculated return on investment. Leadership teams see competitors announcing AI initiatives, read about productivity gains and cost reductions in the trade press, and feel genuine pressure to demonstrate they are not falling behind.

This isn't irrational. First-mover advantages in technology adoption are real. The cost of being late to a platform shift can be high. And the social pressure of being the executive who didn't invest in AI, if AI turns out to be as transformative as its proponents claim, is not trivial.

But FOMO-driven budgeting produces a specific pathology: spending that is hard to justify, hard to measure, and therefore hard to improve. If you don't know why you're spending, you won't know whether it's working.

"The organisations that will look back on 2026 as a turning point — rather than as the year they burned a lot of money — are the ones asking hard ROI questions now, before the budget is committed."

Where AI ROI is actually being realised

To be clear: there are organisations generating genuine, measurable returns from AI investment in 2026. The pattern I see in those organisations is worth examining.

First, they started with a specific problem, not with a technology. They identified a process that was high-cost, high-volume, and well-defined — and then asked whether AI could make it materially better. They didn't buy a platform and then look for problems to solve with it.

Second, they defined success before they deployed. They had a baseline: this process currently costs X, takes Y hours, and produces Z errors per thousand. They had a target: we need to reduce cost by A%, reduce time by B%, and reduce errors by C%. And they had a measurement plan.
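The baseline-and-target discipline above can be sketched in a few lines of code. Everything here is illustrative: the metric names, the numbers, and the `meets_targets` helper are hypothetical, not drawn from any real engagement.

```python
# Illustrative sketch: encode a process baseline, a set of reduction
# targets, and a pass/fail check for an AI deployment. All names and
# numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class ProcessMetrics:
    cost_per_case: float         # X: current cost per processed case
    hours_per_case: float        # Y: current handling time
    errors_per_thousand: float   # Z: current error rate


def meets_targets(baseline: ProcessMetrics,
                  measured: ProcessMetrics,
                  cost_cut_pct: float,
                  time_cut_pct: float,
                  error_cut_pct: float) -> bool:
    """True only if the measured metrics beat every reduction target."""
    return (
        measured.cost_per_case <= baseline.cost_per_case * (1 - cost_cut_pct / 100)
        and measured.hours_per_case <= baseline.hours_per_case * (1 - time_cut_pct / 100)
        and measured.errors_per_thousand <= baseline.errors_per_thousand * (1 - error_cut_pct / 100)
    )


baseline = ProcessMetrics(cost_per_case=42.0, hours_per_case=1.5, errors_per_thousand=12.0)
after = ProcessMetrics(cost_per_case=30.0, hours_per_case=1.0, errors_per_thousand=8.0)
print(meets_targets(baseline, after, cost_cut_pct=25, time_cut_pct=30, error_cut_pct=30))
```

The point of writing it down this starkly is that the deployment either clears the bar on every dimension or it does not; there is no room for a vague "it seems to be helping."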

Third, they treated the first deployment as a learning exercise, not a transformation. They scoped narrowly, measured carefully, and used what they learned to inform the next investment decision. The compounding effect of this approach — each deployment better informed than the last — is significant over 18 to 24 months.

A framework for AI ROI conversations

Before approving an AI budget line, ask three questions:

1. What specific outcome will this investment improve, and what is the current baseline?
2. How will we measure whether it has improved?
3. What is the minimum viable deployment that would let us validate the hypothesis before scaling?

If you can't answer all three, the investment isn't ready.

The measurement gap

One reason ROI conversations get vague is that many of AI's benefits are genuinely hard to measure with traditional finance tools. How do you quantify the value of a faster first draft? Of a customer service agent that doesn't burn out? Of a compliance team that catches more issues because they're not drowning in manual review?

These are real benefits. But "real but hard to measure" is not the same as "unmeasurable." The organisations doing this well have invested in building the measurement infrastructure alongside the AI deployment — not as an afterthought, but as a core part of the program. They're tracking throughput, cycle times, error rates, escalation rates, and employee time allocation. They're doing before-and-after comparisons with control groups where possible.
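The simplest version of a before-and-after comparison with a control group is a difference-in-differences on an operational metric such as cycle time. The sketch below is a minimal illustration with hypothetical data; real deployments would use far larger samples and proper statistical testing.

```python
# Minimal sketch of a pre/post comparison with a control group, using a
# simple difference-in-differences on cycle times. All data is hypothetical.

from statistics import mean


def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group beyond the change seen in the control group.

    A negative value means the deployment reduced cycle time by more than
    the background drift visible in the control group.
    """
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change


# Hypothetical cycle times in hours, before and after an AI-assisted workflow
effect = diff_in_diff(
    treated_pre=[5.0, 6.0, 5.5],
    treated_post=[3.0, 3.5, 3.1],
    control_pre=[5.2, 5.8, 5.6],
    control_post=[5.0, 5.4, 5.3],
)
print(round(effect, 2))
```

The control group matters because operational metrics drift for reasons that have nothing to do with the AI deployment; subtracting the control group's change strips out that drift before crediting the tool.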

This is unglamorous work. It doesn't make for exciting conference presentations. But it's the work that turns AI from a cost centre into a capability.

The board question that's coming

Here's the dynamic that will catch many organisations off guard in the next 18 months. AI budgets have been approved with relatively light scrutiny, partly because the technology is new and partly because the competitive narrative is compelling. That grace period is ending.

As AI spend becomes a material line item — and for many large organisations, it already is — boards and CFOs will start asking for evidence of return. "We're investing in AI to stay competitive" will not be a sufficient answer for much longer. The organisations that have been building their measurement infrastructure will be well-positioned for that conversation. The ones that haven't will face difficult questions about a growing budget with unclear returns.

86% of enterprises are increasing their AI budgets. The ones that do so with clear success metrics and measurement plans will be able to point to evidence when the scrutiny arrives. The ones that are spending to avoid looking like they're not spending will be left hoping the scrutiny doesn't come too soon.

It's coming.
