
AI Projects Fail Because Dishwashers Don't Have Arms

Why 42% of AI initiatives were doomed from the start

95%

of enterprise AI pilots fail to deliver measurable financial returns

Source: MIT NANDA Initiative, 2025

Recent research from MIT's NANDA initiative—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—reveals the stark reality: while some organizations (particularly younger startups) are seeing revenues jump from zero to $20 million within a year using AI, about 95% of AI pilot programs stall, delivering little to no measurable impact on the P&L.

And the situation is getting worse. According to a separate analysis, 42% of companies abandoned most of their AI initiatives in 2025, more than double the rate from just a year earlier. That's billions in wasted investment.

The common diagnosis? Poor data quality. Lack of governance. No clear ownership. Missing business alignment.

These are real problems. But they're symptoms, not the disease.

The real reason most AI projects fail is the same reason dishwashers don't have arms.

The Humanoid Dishwasher Problem

[Illustration: humanoid robot at a sink vs. a modern dishwasher]

Imagine you're trying to solve the problem of dirty dishes at scale. You could build a humanoid robot that stands at your sink, picks up each dish, scrubs it with a sponge, rinses it, and places it in the drying rack.

Same process. Different actor.

That's absurd, of course. Which is exactly why we didn't build it.

We built dishwashers. Machines that look nothing like humans, operate nothing like humans, and redesigned the entire task of cleaning dishes from first principles.

The result? Dishwashers use less water than hand washing. They're more energy efficient. They clean more thoroughly and more consistently than humans ever could. Industrial dishwashing machines in restaurants are even further removed from "human replacement"—completely different architecture, completely different process, dramatically better results.

We didn't replace the human. We redesigned the work.

This Pattern Repeats Across Every Major Technological Shift

Every "wrong" approach below shares one thing: it preserves the old interface and swaps the engine underneath.

| Technology | Wrong Approach | What Actually Worked |
| --- | --- | --- |
| Steam Engine | Ox-shaped machine pulling the same plow | Redesigned agriculture entirely |
| Automobile | Robot driver operating a horse carriage | Redesigned transportation entirely |
| Self-Driving Cars | Robot sitting in the driver's seat | Sensors, cameras, new vehicle architecture |
| Car Manufacturing | Robots mimicking human assembly workers | 3D printing, flipping cars upside down, precision welding, chemical vat immersion |
| AI in Business | Chatbot answering the same questions humans answered | Still waiting for most organizations to figure this out |
[Illustration: a steam horse pulling a coach, and a steam ox plowing a field]
[Illustration: humanoid robot driving a car]

The Lindy Trap

There's a name for this pattern: skeuomorphism, preserving the interface of the old tool because it feels familiar, even when the engine underneath has completely changed. It's paving the cow path. It's pouring new wine into old wineskins instead of new bottles.

The Lindy Trap: old interfaces surviving new engines

And there's a reason smart people keep making this mistake. It's the Lindy Effect of bad design: the longer a flawed structure survives, the more legitimate it feels—even when the environment that produced it no longer exists.

Ox-powered farming worked for millennia. Horse-drawn transport worked for centuries. Human-designed workflows worked for decades. Longevity feels like proof of fitness. But survival in the old environment doesn't mean optimality in the new one.

We don't fail at AI because AI is new. We fail because our processes are old—and feel safe.

Car manufacturing is the perfect example of this evolution. Early factory robots were essentially mechanical humans—standing at stations, performing the same motions a person would. That helped, but it wasn't transformation.

Real transformation came when manufacturers stopped asking "how do we replace the human at this station?" and started asking "how would we build a car if humans weren't a constraint?"

The answer: modular assembly, precision robotics, flipping entire car bodies upside down for better access, dunking frames into chemical vats, 3D printing components that couldn't be manufactured any other way. Processes that were literally impossible for humans to perform.

The car didn't succeed because it replaced the horse. It succeeded because it replaced the carriage.

[Illustration: evolution of car manufacturing, a robot mimicking a human vs. a car body flipped upside down in a modern factory]

That's not robot-as-replacement. That's process redesign around what machinery can do better: doing away with the limitations that shaped the original process.

The Decisions Nobody Wants to Make

When AI stalls, the blame lands on regulation, the models, or "our data isn't ready." Safe targets, all of them. Nobody gets fired for bad data.

But research shows these explanations let everyone off the hook for the actual problem: the conversations nobody wants to have. Should we build this ourselves or partner with someone? Who decides what happens to the data? Who takes the blame if it fails?

These aren't technical problems. They're leadership problems disguised as technical ones. Leaders love the idea of AI transformation. They're less enthusiastic about the meetings where hard decisions actually get made.

The instinct is to build in-house. But companies that haven't done this 200 times are competing with vendors who have. And in AI, speed to production matters. That means admitting your team—however talented—might not be the right fit for this one. Most leadership teams would rather not have that conversation.

The Hotline Problem

Most "AI implementations" I've seen follow this pattern:

Before: Human does Task A → Task B → Task C → Decision

After: Human does Task A → asks AI chatbot about Task B → Task C → Decision

Sometimes, there's even a copy-paste from one AI tool to the next.

That's not transformation. That's giving someone a hotline.

Hotlines still require someone to:

  • Remember it exists, then remember to call it
  • Explain what their issue is, then know what question to ask
  • Interpret the response
  • Decide what to do with the answer

The process is identical. You just added a resource.

Is it really that much more efficient to have AI tell you to turn your computer off and on again, instead of teaching people how to do that first?

[Illustration: worker consulting a chatbot on a computer screen]

But here's why leaders keep choosing this path: it's safer. Adding a chatbot preserves reporting lines, accountability structures, and political capital. Nobody's job changes. Nobody's authority is threatened. You can claim "AI adoption" without actually redesigning anything.

It's new wine in old wineskins—and the wineskins feel proven. It's safe.

And this is exactly what the research shows. MIT found that generic tools like ChatGPT "stall in enterprise use since they don't learn from or adapt to workflows." But more revealing: companies that purchase AI tools from specialized vendors and build partnerships succeed about 67% of the time, while internal builds succeed only one-third as often (roughly 22%).

The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

The successful 5% didn't just add AI to their existing processes. They redesigned the work.

The Excel Parallel

"We use AI" is becoming this decade's version of "we use Excel."

When someone says their team uses Excel, that tells you almost nothing:

Marketing using Excel to track campaign metrics ≠ Finance building discounted cash flow models ≠ Operations managing inventory reorder points ≠ HR maintaining a headcount spreadsheet.

Same tool. Completely different implementations, skill requirements, and success criteria.

"We use AI" is just as meaningless. Are you using it to help write copy? Generate images? Decipher your boss's passive-aggressive emails? Classify thousands of transactions? Build behavioral proxy models from operational data?

These aren't the same project. They shouldn't be measured the same way. And they definitely shouldn't fail for the same reasons.

"Our AI project failed" will sound as absurd in ten years as "our project to introduce computers with Excel into our processes failed" sounds today.

When computers first entered offices, there were absolutely projects that failed. People struggled to adapt. Processes didn't change. The technology sat unused. But no company today would say "we tried computers, they didn't work for us."

They figured it out. They redesigned their work around what computers do better. And the ones who didn't? They're not around to complain about it.

The Infrastructure Trap

There's another failure mode that research consistently identifies: leaders systematically underestimate what AI actually demands.

Traditional capacity planning doesn't work here. Financial services organizations that project modest infrastructure cost increases often find actual costs exceed estimates by factors of three or four. Manufacturers roll out predictive maintenance and watch storage needs double every six months.

Why the miss? AI workloads don't behave like traditional apps. One successful use case spreads fast—and every new instance needs more compute. What worked fine in dev often falls apart at scale.

And agentic AI compounds this problem. A single user query can trigger dozens of internal AI calls, each burning tokens and compute. Traditional planning has no model for this.
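
To see the scale of the miss, it helps to run the arithmetic. Below is a minimal back-of-the-envelope sketch in Python; every number in it (query volume, calls per query, tokens per call, unit price) is an assumed placeholder for illustration, not a benchmark:

```python
# Back-of-the-envelope cost model. All constants are illustrative assumptions.

QUERIES_PER_MONTH = 100_000
TOKENS_PER_CALL = 2_000        # prompt + completion, averaged
COST_PER_1K_TOKENS = 0.01      # hypothetical blended price, in dollars

def monthly_cost(calls_per_query: int) -> float:
    """Monthly spend = queries x internal calls per query x tokens x unit price."""
    total_tokens = QUERIES_PER_MONTH * calls_per_query * TOKENS_PER_CALL
    return total_tokens / 1_000 * COST_PER_1K_TOKENS

planned = monthly_cost(1)    # what traditional capacity planning assumes
agentic = monthly_cost(25)   # one query fanning out into ~25 internal agent calls

print(f"Planned: ${planned:,.0f}/month")
print(f"Agentic: ${agentic:,.0f}/month ({agentic / planned:.0f}x the estimate)")
```

With these made-up inputs, the same traffic costs 25x the plan, purely from fan-out, before any growth in adoption.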

Then there's obsolescence. Companies that spent months building custom RAG implementations are watching that work get commoditized by off-the-shelf solutions. What took six months to build can become irrelevant in six weeks.

The RACI Redesign

Here's the question most failed AI projects never asked:

If we were designing this process today, knowing what's now possible, what would it look like?

That's the dishwasher question. That's why dishwashers don't have arms.

Real AI transformation requires a full RACI redesign, not just "add AI to the flowchart." A sketch of what that redistribution might look like follows the questions below.

Questions That Should Have Been Asked First:

  • Responsible: What tasks should the AI own entirely—not be consulted on, but be responsible for?
  • Accountable: What should humans be accountable for that they weren't before?
  • Consulted: Where does consultation actually happen now—and in which direction?
  • Informed: What does the human only need to be informed about rather than involved in?
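
As a thought experiment, that redistribution can be written down explicitly. Here is a minimal sketch in Python, using a hypothetical contract-review process (the task names, roles, and assignments are invented for illustration), that encodes the new RACI and enforces one rule the questions above imply: accountability stays human.

```python
# Hypothetical RACI for a contract-review process redesigned around AI.
# Every task, role, and assignment below is an illustrative assumption.

raci = {
    "extract contract clauses":   {"R": "AI", "A": "legal ops lead",
                                   "C": [], "I": ["requesting team"]},
    "flag non-standard terms":    {"R": "AI", "A": "legal ops lead",
                                   "C": ["counsel"], "I": []},
    "approve flagged exceptions": {"R": "counsel", "A": "general counsel",
                                   "C": ["AI"], "I": ["requesting team"]},
}

# The AI can be Responsible for execution, but a named human stays Accountable.
for task, roles in raci.items():
    assert roles["A"] != "AI", f"{task}: accountability cannot sit with the AI"
```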

The direction is becoming clear: humans move from doing to validating. The exact blueprint is still being written—but the organizations figuring it out aren't asking "how do we use AI?" They're asking "what should humans be accountable for once AI is responsible for execution?"

Most failed AI projects never redistributed the work. They just added a box to the flowchart that said "consult AI" and called it transformation.

What the Winners Are Doing Differently

Organizations achieving AI success follow a counterintuitive resource split: 10% on algorithms, 20% on infrastructure, 70% on people and process.

Most failures invert that—obsessing over the model while ignoring everything that makes it work.

They also build for flexibility. AI moves fast—today's cutting edge is tomorrow's legacy. Winners design with abstraction layers that insulate them from tech shifts. They don't bet everything on one approach.

[Illustration: flowchart with a "consult AI" box vs. a redesigned process flow]

Adding a "consult AI" box, by contrast, is like adding "consult the steam engine" to your ox-powered farming process. You haven't transformed anything. You've just created confusion about who's actually pulling the plow.

The Real Failure

AI didn't fail. Old processes outlived their usefulness—and we trusted them anyway.

You can't blame the dishwasher for not being a better pair of hands. You can't blame the steam engine for not being a faster ox. And you can't blame AI for not making your existing processes marginally more efficient.

The 5% of organizations seeing real returns from AI aren't the ones with better chatbots. They're the ones who looked at their processes and asked: What would this look like if it were designed around what's now possible?

Research shows successful organizations force strategic clarity before touching a model:

1. What problem are we actually solving?

"Implement AI" is not a business objective. "Reduce customer service response time by 40%" is. "Use generative AI" is a technology choice. "Cut contract review cycles from two weeks to two days" is a business outcome.

2. Where does the data actually live?

What's too sensitive to leave your walls? What can go external under tight controls? What's fair game? That classification drives every implementation decision.
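
One way to make that classification actually drive decisions is to encode it somewhere every implementation has to read it. A minimal sketch, with invented category names and routing tiers:

```python
# Hypothetical data-classification map. Categories and tiers are assumptions.

DATA_POLICY = {
    "customer_pii":    "on_prem_only",    # too sensitive to leave your walls
    "contract_text":   "vpc_with_dpa",    # external, but under tight controls
    "public_web_copy": "any_vendor_api",  # fair game
}

def deployment_tier(data_class: str) -> str:
    """Route each workload by data class; unknown classes default to strictest."""
    return DATA_POLICY.get(data_class, "on_prem_only")

print(deployment_tier("customer_pii"))  # -> on_prem_only
```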

3. Who owns outcomes—not experiments?

Success means line managers and front-line teams driving adoption—not just the AI lab. When it stays in the hands of specialists, it stays in pilot purgatory.

That's the work. Not "how do we use AI?" but "what decisions are we trying to make better, and what would we do differently if we had the answer instantly, at scale, for every transaction?"

Until that question is answered, you're not implementing AI.

You're just building a humanoid robot to wash your dishes.

AI is revolutionary.

AI is exciting.

Implementing AI will give your company an unfair advantage over your competitors.

These are all true.

So maybe you should have asked your AI how best to redesign your processes to integrate it, instead of rushing to layer AI onto human-designed processes.

What process in your organization is still waiting for its dishwasher moment?

The one where you stop optimizing around human constraints and start redesigning around what's actually possible.

— Raf Alencar

Raf Alencar

Growth & Performance Leader | Customer Value, ROI & Scalable Growth through Analytics
