The Commoditization of LLMs
I want to start this post with a question: What do the following technology developments have in common?
What I see in each is a capital-intensive buildout of a game-changing technology whose full impact is not felt until the service itself has been thoroughly commoditized. In other words, the greatest rewards of these buildouts do not go to those who underwrite them but rather to those who come after and leverage what has become the latest new “free” resource. Examples of such latter-day winners include:
All these companies inherited the infrastructure that they exploit—they did not build it out themselves.
Now, to be sure, there have been some hybrid successes—notably Amazon and Tesla. In both cases, led by CEOs who refused to compromise, these companies leveraged exceptional capex buildouts to achieve sustainable competitive advantages. That said, long-term competitive moats have historically been built around proprietary architectures with high switching costs, the sort of thing that allowed IBM, Microsoft, and Intel to stay so long at the top. As strong as Amazon and Tesla are, neither enjoys such a long-lived structural advantage—each must continually renew its differentiation through innovation in competition with the next cohort of challengers.
OK, consider all the foregoing to be a prelude to the question I really want to pose: Aren’t LLMs the latest entry on the first list?
We know they do not have proprietary architectures with high switching costs because many of their enterprise customers are already integrating two or more models into the core of their offerings. And we know they have not yet commoditized because the amount of capex they require cannot support freemium market development at scale. Someone has to buy the tokens, and regardless of who that is, those payments must asymptotically approach zero for commoditization to occur. Until it does, the extraordinary benefits of this technology can be realized only in pockets.
Now, kudos to Anthropic for finding the first pocket. Claude Code is a game changer, and there is so much trapped value in the software development life cycle that it does not need a commoditized foundation to change the game at this stage. Long term, though, to deliver the social and economic impact it is destined to, it too must and will commoditize. Until LLMs arrive at that stage, things must proceed more slowly, making adoption, not innovation, the gating item in market development.
Now, if you are thinking that this is only a problem for the LLM providers building their frontier models, think again. Every industry wants and needs to leverage those models in agentic AI use cases; the ones that really do will change their game. Today, however, they have to find a way to either absorb or pass along the cost of tokens, which are still materially expensive. Absorbing the tokens alienates investors, whereas passing them along alienates customers. The latter, in particular, are understandably worried about blowing their compute budgets on tokens, something many have already experienced in their DIY experiments. In any event, someone has to absorb the risk, and right now, that is what is gating the speed at which the market opportunities can be exploited.
In that context, I just spent a day at a conference sponsored by Chargebee on pricing AI-enabled offerings. People are trying every trick in the book to find pricing models and mechanisms that can operate at scale during this transition. The point is, at this stage, we have to be creative. We cannot adopt the Internet market development philosophy of the 1990s, the one that said URL stands for Ubiquity Now, Revenue Later, because freemium only works atop an already commoditized substrate. Instead, vendors building LLMs into their offerings, along with their downstream customers, must control their investments and meter their expectations, focusing on use cases that can absorb AI in its pre-commodity state.
The people bearing the brunt of the risk right now are the LLM providers themselves, enabled by a web of financing institutions that are underwriting capex buildouts unlike anything we have ever seen before. It is a Gold Rush mentality driven by a race to out-innovate the competition to enable unprecedented outcomes in virtually any digitally enabled workflow. I personally do not doubt they will achieve this goal—the technology itself is amazing, and the people who are creating it even more so. I also have no doubt the ROI to the world will be hugely positive. I just think the LLM investors themselves will have to be a lot more patient as to when those returns arrive for them and how large they will turn out to be.
That’s what I think. What do you think?