
Why 95% of AI projects fail to deliver ROI — and what the 5% do differently

The problem isn't the technology. It's what happens before the technology gets switched on.

Mark Pinnes · April 2026 · 8 min read

MIT's "The GenAI Divide" report surveyed 350 employees, interviewed 150 leaders, and analysed 300 public AI deployments. The finding: 95% of generative AI pilots fail to deliver measurable business value.

RAND Corporation puts the overall figure at 80.3%. Deloitte found that 42% of companies abandoned at least one AI initiative in 2025, with an average sunk cost of $7.2 million per failed project.

These numbers get quoted a lot. What gets quoted less is what the successful 5% actually did differently. Having deployed AI systems that produce daily business value, I can tell you the answer is surprisingly boring. It has nothing to do with choosing the right model or the right vendor. It has everything to do with three decisions that most companies get wrong before the technology is even switched on.

The three decisions that separate the 5% from the 95%

1. They scoped for the workflow, not the demo

Most AI projects start with a question like "What can we do with AI?" That question leads to pilots designed to impress a steering committee. A chatbot that answers product questions. A tool that drafts marketing emails. A system that summarises meeting notes.

The demo works. Everyone gets excited. Then someone tries to connect it to the way work actually happens, and it falls apart. The chatbot gives wrong answers about edge cases. The email drafts need so much editing they take longer than writing from scratch. The meeting summaries miss the decisions that matter.

RAND found that 33.8% of AI projects get abandoned before they reach production. Most of those looked promising in a demo environment.

What the 5% do instead: They start with a specific workflow and work backwards. Not "Can AI write our emails?" but "Our sales team spends four hours a day on proposal drafts. The first draft follows a pattern. The customisation requires judgement. Can AI do the pattern part and present a draft that a senior person can review in twenty minutes instead of two hours?" That's a scoping question with a measurable answer.
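To show what "a measurable answer" looks like, here's a back-of-the-envelope sketch in Python using the hypothetical numbers from that example. The volumes are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope for the proposal-drafting example above.
# All figures are the hypothetical ones from the scoping question,
# not measured benchmarks.

hours_per_draft_manual = 2.0        # senior person writes/reviews from scratch
hours_per_draft_with_ai = 20 / 60   # senior person reviews an AI first draft
drafts_per_day = 2                  # assumed: four hours a day at two hours each

hours_freed = drafts_per_day * (hours_per_draft_manual - hours_per_draft_with_ai)
print(f"Senior hours freed per person per day: {hours_freed:.1f}")  # -> 3.3
```

That number, and whether it actually materialises, is something you can track. "Can AI write our emails?" is not.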

2. They defined where human judgement is non-negotiable

AI can produce volume. It's very good at first drafts, data processing, pattern matching, and repetitive tasks at speed. It is bad at knowing when something is good enough. It has no ear for quality, no instinct for what a customer will actually respond to, no sense of when the standard is wrong.

When the cost of production drops to zero, the cost of standing out becomes infinite.

The companies that deploy AI without a quality framework discover this the hard way. Output goes up. Quality goes down. Customers notice. The team spends more time fixing AI output than they saved by using it.

What the 5% do instead: They build the deployment around their best people's judgement. The person who knows what a good sales email sounds like defines the framework AI works within. The analyst who understands which data matters reviews the output. AI handles the volume. Humans hold the standard. The system produces at the maximum speed your best people can approve.

3. They measured from day one

MIT's data shows that organisations with clear pre-approval metrics achieve 54% success rates. Organisations without them: 12%.

That's a 4.5x difference in outcome based on a decision made before deployment starts.

54% · success rate with pre-defined metrics
12% · success rate without pre-defined metrics
+188% · average ROI of successful AI projects ($5.1M invested, $14.7M returned)

What the 5% do instead: Before deployment, they answer four questions. What specific business outcome does this improve? What's the current baseline? What would success look like in 90 days? And how will we measure it? If they can't answer all four, they don't deploy.
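As an illustration only, those four questions can be treated as a literal deployment gate. The field names and example values below are my sketch, not a framework from the MIT report:

```python
# The four pre-deployment questions as a literal checklist.
# Field names and example values are illustrative, not a published framework.
from dataclasses import dataclass, fields

@dataclass
class DeploymentCase:
    business_outcome: str    # what specific business outcome does this improve?
    current_baseline: str    # what's the current baseline?
    success_at_90_days: str  # what would success look like in 90 days?
    measurement_method: str  # how will we measure it?

def ready_to_deploy(case: DeploymentCase) -> bool:
    """If any of the four questions is unanswered, don't deploy."""
    return all(getattr(case, f.name).strip() for f in fields(case))

case = DeploymentCase(
    business_outcome="Cut senior review time on sales proposals",
    current_baseline="Two hours of senior time per proposal draft",
    success_at_90_days="Twenty minutes of review per proposal, quality held",
    measurement_method="Review time logged per proposal in the CRM",
)
assert ready_to_deploy(case)
```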

Why 84% of failures are leadership failures

The Pertama Partners analysis broke down why projects fail. Of the failed projects, 73% lacked clear executive alignment on success metrics. 68% underinvested in data governance. 61% treated AI as an IT project rather than a business transformation. 56% lost C-suite sponsorship within six months.

None of those are technology problems. They're all decisions made by people before the technology was involved.

The pattern is consistent: a senior leader gets excited about AI, approves a budget, delegates the deployment to a technical team, and moves on. The technical team builds what they were asked to build. Nobody checks whether what they were asked to build maps to a business outcome. Six months later, the project gets quietly shelved and $7.2 million has been spent.

What this means for your business

If you've deployed AI and the return isn't there, the most likely cause is one of the three problems above. Wrong scope, missing quality framework, or no measurement baseline. All fixable.

If you haven't deployed yet, you have an advantage. Getting these three decisions right before you start means you skip the expensive failures entirely. Companies that get them right report an average ROI of +188% on their AI investments.

The gap between the 5% and the 95% isn't talent, budget, or technology. It's discipline about scoping, quality, and measurement. That's unglamorous work. It's also the work that determines whether your AI investment produces a return or becomes a line item that nobody wants to talk about at the next board meeting.

Find out where your AI investment stands

A thirty-minute diagnostic call will identify whether your AI spend has a return problem or a deployment problem.

Book a diagnostic call

Learn about our AI ROI workshops and deployment support →
