88% of organizations now use AI in at least one business function—up from 78% last year—but only 6% have achieved enterprise-wide transformation that's actually moving the needle on revenue and innovation.
So we have a situation where nearly every company has adopted AI, and nearly every company is failing to win with it. That is a fascinating and deeply uncomfortable result. It means that access to the technology isn't the constraint. Something else is going wrong.
Everyone has it. Nobody's winning.
When every company has the same tools, the tools stop being the advantage. This seems obvious when you say it out loud. And yet the default AI strategy at most companies is essentially: buy the tools, run a pilot, declare progress. The press release writes itself. "We're an AI-forward company." Great. So is everyone else.
What you've done is rent a capability. The moment your competitor opens the same browser tab and signs up for the same subscription, your advantage is gone. This is not strategy. This is procurement.
The companies actually winning aren't using better AI. They're doing something that's harder to copy: rebuilding how they work from the ground up. Not adding AI to old processes. Replacing the processes entirely.
That distinction sounds small. It's enormous.
The question that changes everything
Most companies ask: where can we use AI?
The companies pulling ahead ask: where do we make the same decision fifty times a week, and what would this look like if we built it from scratch knowing AI existed?
The first question gives you efficiency. The second gives you a moat.
Efficiency is great. Everybody likes efficiency. But efficiency can be copied by anyone with the same software budget and an afternoon free. A rebuilt process, refined over twelve months, embedded into how your team actually works—that takes real time to replicate. By the time a competitor figures out you built it, you're already twelve months further ahead.
This is the compounding dynamic that separates the 6% from the 94%.
Three things your competitors can't buy
Three things compound over time in a way that's genuinely hard to replicate. None of them can be bought off the shelf, which is precisely why most companies don't have them.
The first is proprietary data. Every customer interaction, every decision your team makes, every outcome you've seen—that's data. Most companies let it evaporate. It lives in emails, in Slack threads, in the heads of people who eventually leave. The winners treat data capture as an operational discipline, not an IT project. They build systems that collect it, structure it, and feed it back into their models continuously. The result is something that looks like a small advantage in year one and an insurmountable one in year three. Bloomberg has been building its financial data moat for decades. OpenEvidence has exclusive licensing agreements with medical journals that general-purpose models simply cannot access. The model is the same one anyone can use. The data underneath is not. That asymmetry compounds every quarter you keep building it and every quarter your competitor doesn't.
The second is rebuilt workflows. Not prompts. Not a chatbot bolted onto a form that existed before AI was invented. A fundamentally different way of doing the work, designed from scratch around what AI can actually do, tested and refined until it runs without anyone consciously thinking about it. Most companies read about process transformation and nod. Almost none of them act on it, because rebuilding a workflow is slow, unglamorous work that doesn't make for a good press release. The companies doing it anyway are creating something invisible from the outside, which is exactly why it's defensible. Copying a tool takes an afternoon. Copying eighteen months of process iteration (the dead ends, the adjustments, the institutional knowledge baked into every step) is a different problem entirely.
The third is the feedback loop, and it's the one most companies don't even know they're leaving on the table. Every time someone on a team corrects an AI output, overrides a recommendation, or catches a mistake, that's a signal. It's information about where the model is wrong, where the process breaks down, where human judgment still matters. Companies that capture this systematically are getting measurably smarter every week. Companies that ignore it are running the same system they launched with, just on slightly newer hardware. The gap between these two groups is not linear. It's compounding. And it does not close by itself.
Data without stickiness is just a library nobody visits
Most writing about AI moats stops at proprietary data, as if owning a lot of it is sufficient. It isn't. Data is an input. What you build on top of it, and where you build it, is what determines whether anyone actually depends on you.
Think about what happened with GitHub Copilot. When it launched, there were already competitors with comparable or better underlying models. It didn't matter. Copilot was embedded in the editor developers already had open for eight hours a day. It didn't ask anyone to change their behavior. It just appeared inside the behavior they already had. That's not a technology advantage. That's a distribution advantage, and it turned out to be more durable than any benchmark.
The same logic applies internally. The AI tools that stick inside companies are the ones that become load-bearing. They stop being something you use and start being something work can't happen without. Once a system holds six months of institutional memory (decisions made, context captured, outputs refined), switching away doesn't just mean losing a feature. It means starting over. That's a switching cost a better-priced competitor can't easily overcome.
So the question isn't just what data do you have. It's: have you built something that people would genuinely struggle to leave? If the answer is no, you have an interesting database and a procurement problem.
Why most companies are still having the same conversation they had in 2023
The companies that are stuck aren't stuck because they haven't tried. They're stuck because they've been trying in a way that was never going to work.
The pattern is almost comically consistent. A pilot gets approved. It works beautifully in a controlled environment. Someone senior gets excited. Then it hits the real world: messy data, legacy systems, people who were never asked whether they wanted this. It quietly falls apart. Leadership moves on to the next announcement. The cycle repeats.
The numbers on this are extraordinary. MIT's Project NANDA analyzed over 300 enterprise AI implementations and found that 95% of generative AI pilots fail to deliver any measurable business impact, despite $30 to $40 billion of global investment. BCG surveyed 1,000 executives across 59 countries and found 74% of companies struggling to achieve and scale AI value. Gartner predicts 30% of GenAI projects will be abandoned entirely after proof of concept by end of 2025. These are not fringe findings. This is the consensus reality.
BCG has a framework that explains why. They call it the 10-20-70 principle: AI success is 10% algorithms, 20% data and technology, and 70% people, processes, and cultural transformation. Most companies have been spending their budget on the 10% and wondering why nothing compounds.
The exit isn't a better tool. It's a different kind of commitment. The companies generating real returns pick one workflow, throw actual resources at it (not a skunkworks budget and a part-time project manager), redesign it from scratch, and don't declare victory until the numbers move. Then they do it again. It is profoundly unsexy. It is also, apparently, very hard to do. BCG found that the companies getting this right are achieving 1.6 times greater shareholder returns than their peers. The gap is widening, not narrowing.
The only question that matters: If every AI model became free and equally capable tomorrow—same quality, same price, available to any competitor who wanted it—would the business still have an advantage?
If yes, something real is being built.
If no—and the honest answer for most companies is no—then everything done so far is table stakes. The technology works. Nothing has been built yet that the technology alone can't replicate. The window to change that is open, but it isn't infinite.
That's all for today. I'm going to go stare at my laptop and pretend I'm thinking strategically. See you Wednesday.
