
The pattern is always the same. A CEO reads a McKinsey report, watches a competitor's earnings call, or sits through a board meeting where someone says "AI strategy" four times. The next week, procurement gets a call: we need an enterprise AI platform. Budget: six figures. Timeline: yesterday.
Three months later, the platform is live. Nobody's using it. The data it needs doesn't exist in the format it requires. The team that was supposed to champion it is buried in their actual work. And the vendor is scheduling "enablement workshops" that feel a lot like apology tours.
This happens at nearly every company I work with. Not because they picked the wrong platform. Because they started in the wrong place entirely.
The inversion problem
Most AI adoption follows this sequence: pick a platform, find problems it can solve, convince people to use it. The companies actually getting value do the opposite: find the friction, pick the smallest tool that fixes it, expand from there.
The difference sounds subtle. It isn't. The first approach is technology-forward. The second is problem-forward. And in practice, the gap between the two is the gap between a $200K line item that nobody can justify at renewal and a $50/month tool that three departments refuse to give up.
I've seen a legal team save 15 hours a week with a simple document comparison workflow that cost nothing beyond their existing API access. Same company had a $150K "AI transformation platform" sitting unused in another department. The legal team found their friction first. The other department bought a solution first. Guess which one showed ROI.
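A workflow like that can start smaller than people expect. As a rough sketch (the function name and the sample clauses are mine, not the legal team's), the first pass at flagging clause deviations doesn't even need a model call; Python's stdlib `difflib` can separate "matches the template" from "needs a human":

```python
import difflib

def clause_deviations(standard: str, actual: str) -> list[str]:
    """Line-level diff of an actual clause against the standard template.
    An empty list means no deviation to review."""
    return [
        line
        for line in difflib.unified_diff(
            standard.splitlines(), actual.splitlines(),
            fromfile="standard", tofile="actual", lineterm="")
        # Keep only the changed lines, not the diff headers
        if line.startswith(("+", "-"))
        and not line.startswith(("+++", "---"))
    ]

# Hypothetical clauses for illustration
standard = "Either party may terminate with 30 days written notice."
actual = "Either party may terminate with 10 days written notice."
deviations = clause_deviations(standard, actual)
```

Only the flagged clauses go on to a person, or to a model with existing API access, for judgment. That triage step is where the 15 hours a week comes from.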
Platform-first thinking is a procurement habit, not a strategy
Companies buy platforms because that's how enterprise software has worked for decades. You evaluate vendors, run an RFP, negotiate a contract, roll it out. The process is familiar. It has a Gantt chart. It makes sense on a slide.
But AI isn't like buying a new CRM or an ERP system. Those tools digitize existing processes. With AI, you have to find the process worth changing before the tool can change anything. When you buy the platform first, you're essentially buying an answer before you know the question.
The result is what I call "solution in search of a problem" syndrome. Teams get handed a powerful tool and told to find ways to use it. They dutifully build a few demos. The demos look impressive in the all-hands meeting. Then everyone goes back to doing their jobs the way they always have, because the demos solved problems nobody actually had.
What the backwards companies get wrong
Klarna is the poster child for aggressive AI adoption. They cut headcount dramatically, announced AI was handling the work of 700 customer service agents, and became the case study every AI vendor wanted to reference. Then they started rehiring. Not because AI failed, but because the customer experience degraded in ways the metrics didn't initially capture. The tool worked. The strategy of applying it everywhere, all at once, without understanding which problems it actually solved well, didn't.
This is the pattern. Companies treat AI adoption like a light switch, something you turn on across the organization. But AI capability is uneven. It's excellent at some tasks, mediocre at others, and actively harmful for a few. The companies that deploy it everywhere simultaneously discover this unevenness the hard way, usually through customer complaints or quality issues that take months to surface.
The 80% statistic from Databricks, that four out of five databases on their platform are now built by AI agents, is real and impressive. But Databricks didn't get there by telling every team to start using agents on day one. They built a platform where agents could work with clean, well-structured data. The infrastructure came first. The agents came second.
The data problem nobody wants to talk about
Here's the uncomfortable truth behind most failed AI deployments: the data isn't ready.

Only about a third of organizations have successfully industrialized their data operations. The rest are sitting on fragmented, inconsistent, poorly governed information spread across dozens of systems. Bolting an AI platform on top of that mess doesn't fix the mess. It just makes the mess more expensive.
I've watched companies spend six months and significant budgets implementing an AI analytics platform, only to discover that their customer data lives in three different systems with three different ID schemas that don't map to each other. The AI platform works fine. The data underneath it doesn't. And now they need another six months just to clean up the foundation they should have started with.
The unsexy version of AI adoption is this: before you buy anything, audit your data. Find out where it lives, how clean it is, whether it's accessible through APIs, and who owns it. That audit will tell you more about your AI readiness than any vendor demo ever could.
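The first pass of that audit can be a script, not a project. Here's a minimal sketch, with an invented schema and sample export, of the two checks that catch the most problems in practice: null rates per column and duplicate IDs.

```python
import csv
import io

def audit_csv(rows, id_field):
    """Minimal data-audit pass: per-column null rate plus duplicate IDs."""
    rows = list(rows)
    total = len(rows)
    nulls = {field: 0 for field in rows[0]}
    seen, dupes = set(), set()
    for row in rows:
        for field, value in row.items():
            if value is None or value.strip() == "":
                nulls[field] += 1
        rid = row[id_field]
        if rid in seen:
            dupes.add(rid)
        seen.add(rid)
    return {
        "rows": total,
        "null_rate": {f: n / total for f, n in nulls.items()},
        "duplicate_ids": sorted(dupes),
    }

# Hypothetical export from one of the systems under audit
sample = io.StringIO(
    "customer_id,email,region\n"
    "C001,a@example.com,EMEA\n"
    "C002,,EMEA\n"
    "C001,b@example.com,\n"
)
report = audit_csv(csv.DictReader(sample), "customer_id")
```

Run that against a real export from each system and you'll know in an afternoon whether the data is ready for anything, AI included.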
What the problem-first companies do differently
The organizations I see getting real value from AI share a few characteristics, and none of them involve buying an enterprise platform on day one.

They start with friction. Literally. They ask teams: what takes too long, what's repetitive, what's error-prone, what makes you stay late? The answers are never "we need a large language model." The answers are things like "I spend four hours every Monday reformatting data from the sales report into the format the finance team needs" or "reviewing these contracts for standard clause deviations takes three days per deal."
They pick the smallest tool that works. Sometimes that's an API call to Claude wrapped in a simple script. Sometimes it's a Zapier automation with an AI step. Sometimes it's a $20/month SaaS tool that does one thing well. It's almost never a six-figure platform commitment.
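What "an API call to Claude wrapped in a simple script" looks like, as a sketch: a one-file tool for the Monday reformatting job. The column names, prompt, and model name here are illustrative assumptions, not anyone's real schema, and the only dependency is the `anthropic` package and an API key.

```python
PROMPT_TEMPLATE = (
    "Convert the following sales rows into the finance team's format: "
    "one line per row, as 'region | net_revenue_usd | month'.\n\n{rows}"
)

def build_prompt(raw_rows: list[str]) -> str:
    """Deterministic part of the script: assemble the model prompt."""
    return PROMPT_TEMPLATE.format(rows="\n".join(raw_rows))

def reformat(raw_rows: list[str]) -> str:
    """The AI step: one API call, no platform. Requires the `anthropic`
    package and an ANTHROPIC_API_KEY in the environment."""
    import anthropic
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed; use whatever is current
        max_tokens=1024,
        messages=[{"role": "user", "content": build_prompt(raw_rows)}],
    )
    return response.content[0].text

# Illustrative rows from the hypothetical sales export
prompt = build_prompt(["EMEA,10400.50,2026-01", "APAC,8200.00,2026-01"])
```

That's the whole tool. If it kills the four-hour Monday job, you've proven the use case; if it doesn't, you've lost an afternoon, not a budget line.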
They measure before and after. Not "AI interactions per day" or "platform adoption rates," but the actual metric that matters: did the friction go away? Does the Monday reformatting still take four hours? How long does contract review take now? These are boring measurements. They're also the only ones that matter.
They expand from proof, not from strategy decks. When the legal team's contract review tool saves 15 hours a week, the procurement team notices and asks for something similar. Adoption spreads through demonstrated value, not through top-down mandates. This is slower than a platform rollout. It's also dramatically more likely to stick.
The $50 spreadsheet versus the $50K platform
There's a version of this argument that sounds like I'm saying companies shouldn't invest in AI. That's not it. The investment matters. But the sequence matters more.
A company that spends $50 cleaning up a critical spreadsheet workflow and sees immediate results has learned something invaluable: what AI adoption actually feels like when it works. That knowledge, the lived experience of finding friction, applying a tool, and measuring the improvement, is worth more than any amount of strategic planning.
From that $50 win, they can make informed decisions about the $500 tool, the $5,000 integration, and eventually the $50,000 platform. Each step is grounded in evidence from the previous one. The platform purchase, if it ever happens, is justified by a portfolio of proven use cases rather than a hypothesis about future value.
Compare that to the company that starts with the $50,000 platform and spends the next year trying to justify the purchase. Every use case they build is contaminated by the need to prove the platform was worth it, not by a genuine assessment of whether it's the right tool for the problem.
The uncomfortable question
If your organization is considering a major AI investment, ask this question first: can you name five specific, measurable friction points that AI would solve, and can you describe what "solved" looks like in numbers?
If the answer is yes, you're probably ready for a platform conversation. If the answer is "we'll figure out the use cases after we have the tool," you're doing it backwards. And you'll join the long list of companies that spent a lot of money on AI and can't explain what they got for it.
The companies getting real value from AI in 2026 aren't the ones with the biggest budgets or the most advanced platforms. They're the ones that started with a problem, picked a small tool, measured the result, and did it again. No transformation deck required.
That's not the story the vendors want to tell. But it's the one that actually works.