The ROI Problem of AI: Dazzling Capabilities, But Powerful Market Incentives Blocking Bottom-Line Corporate-Profit Gains

& so: this is nuts! When’s the crash? I watch the race to build ever-smarter machines and think those hoping for immense profits for themselves are highly likely to wind up very disappointed. Think railroads in the 1800s. One railroad from place-to-place creates a fortune. Two railroads eke out a living. Three railroads bankrupt everyone…


“Artificial intelligence” seems to be everywhere, but its profits are not. Tech giants and ambitious startups flood the market with free or cheap AI tools. Yet the gap between value creation and value extraction yawns ever wider.

Consider who is really likely to get paid when the music stops.

The problem is not one of technical capability: MAMLMs now perform feats that would have seemed like science fiction a decade ago, from instant translation to sophisticated creative work. The problem is economic—and historical.

Consider the fate of the dot-com boom, with Microsoft’s decision to give away Internet Explorer for free; or the outcome of the streaming wars; or railroads in the 1800s. When marginal cost approaches zero and the big players have deep enough pockets, competition becomes a Red Queen’s race: everyone must run ever faster merely to avoid falling behind, yet no one gets ahead—except for users smart enough to become masters of, rather than attention-slaves to, the technology.


An interesting piece by Eric Koziol. My take:

For starters, I would say that you can almost never automate an individual’s job entirely—and when you do, you discover that they were also doing a lot of things necessary for your organization to function that were not their “job”:

Eric Koziol: The ROI Problem of AI <https://embracingenigmas.substack.com/p/the-roi-problem-of-ai>: ‘AI currently has an ROI problem. It is clear that AI can create value but proving and realizing that value is less clear.… Tension is occurring because a gap currently exists between the creation of value and the extraction of value…. Just because value can be easily created by AI does not mean you are able to extract the value easily. Value extraction comes mainly in…increased capability/market capture and decreased headcount. The problem with the first is that all of your competitors might be doing the same thing… The problem with the second is that you can’t always automate an individual’s job entirely. Let’s look at some examples…



But put that to the side for another day.

Instead, focus on this: second, I would say that the problem is not that “all of your competitors might be doing the same thing…” by building out their MAMLM capabilities. The problem is, rather, that some of your competitors are building out their MAMLM capabilities with no intention of ever charging for any of it. This is Netscape’s old “Microsoft is going to give Internet Explorer away, for free, forever” problem. If your competitors are building out MAMLM capabilities and charging for them, you have a business model, provided you can outperform them. If your competitors are building out MAMLM capabilities and not charging, you don’t.

This building-out-and-not-charging phenomenon takes two forms:

  • Highly profitable platform oligopolists with lots of market power and hence profits, who see giving away MAMLM natural-language interfaces and other capabilities as a way of buying insurance against Clayton Christensen-style disruption. Examples: Google, Facebook, Microsoft, Apple, Oracle, Salesforce, and Amazon.

  • Wannabe platform oligopolists who think that rapidly building out as much as they can is the way to realize their dreams. Thus if you have a good MAMLM use case, OpenAI and Perplexity will grab it and roll it into their core offerings, in the hopes that it will help them grow, and that, as they grow, enough of their customers will subscribe to their $20/month “pro” plans that they actually have a business.

Between the deep-pocket platform oligopolists with more money than god, and the startups that have convinced overgullible venture capitalists that they have a chance of joining them, there are a lot of companies right now following the strategy of building out their MAMLM capabilities with no intention of ever charging for any of it. And there will be until the collapse of the AI bubble and the shakeout. So any company hoping to actually become a profitable business (rather than get acquihired by one of the big guys) needs to plan for a very long runway indeed. Think “streaming wars”, but more so.

What follows from this seems obvious, to me at least:

The coming of MAMLM natural-language interfaces and related capabilities to information technology is poised to deliver a significant boon for user surplus. Powerful AI-driven tools—think ChatGPT, Google Gemini, or Perplexity—make previously expensive or inaccessible capabilities (such as advanced research assistance, coding help, or image generation) available to the masses, often for little or no direct cost. A classic case of technological progress expanding the economic pie for ordinary people, much as the arrival of the Internet did in the 1990s. The issue for them—us—is to use our powers to live wisely and well.

However, while users may find themselves the beneficiaries of a cornucopia of AI-powered services, the effect on profits and on the rational, fundamental values of stocks is likely to be quite the opposite. The economics here are not so different from what happened during the “streaming wars” or, for those with longer memories, the dot-com bubble, or even the “ruinous competition” of railroads in the 1890s. When competition is fierce, and the marginal cost of digital goods approaches zero, companies are incentivized to give away ever more value to attract and retain users, often at the expense of profitability. This “race to the bottom” dynamic can lead to a market saturated with loss-leading services, where only a few players (often those with deep pockets or unique technological moats) survive, and the rest either fold or get acquired. For investors, this means that the promise of AI as a profit engine may be illusory, at least until the inevitable shakeout occurs and equilibrium is restored—if it ever is.

The rational fundamental value of a stock, after all, is the present value of expected future profits; if profits are elusive, so too is the justification for sky-high valuations.
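To see how quickly competition eats into fundamental value, here is a back-of-the-envelope discounted-profits calculation. All the numbers are hypothetical, chosen only to contrast a firm with durable profits against one caught in a race to the bottom:

```python
# Present value of a stream of expected future profits,
# discounted at a constant rate. All figures hypothetical.

def fundamental_value(expected_profits, discount_rate):
    """Sum of profits discounted back to the present."""
    return sum(p / (1 + discount_rate) ** t
               for t, p in enumerate(expected_profits, start=1))

# Two stylized firms over ten years, at a 5% discount rate:
# one keeps earning 100/year; the other sees profits competed away.
monopolist = fundamental_value([100] * 10, 0.05)
red_queen = fundamental_value([100, 50, 25, 10, 0, 0, 0, 0, 0, 0], 0.05)

print(round(monopolist))  # 772
print(round(red_queen))   # 170
```

Same technology, same first-year profits—but once competitors give the product away, the rational valuation collapses to a fraction of the durable-profits case.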

There is, of course, an exception: businesses that can provide MAMLM capabilities “on device” and do so cheaply stand to gain an enormous competitive advantage.

Apple, for example, is uniquely positioned here, given its control over both hardware and software, and its ability to integrate AI features natively into its ecosystem. Imagine a world where your iPhone or MacBook runs advanced language models locally, preserving privacy, reducing latency, and eliminating the need for expensive cloud infrastructure. This could allow Apple to offer premium AI features as part of its existing suite of services, locking in users and extracting additional value without incurring the ongoing costs that bedevil cloud-based competitors. In this scenario, the hardware vendor becomes the gatekeeper of the next generation of AI capabilities, much as Microsoft did with Windows in the 1990s.

Could. So far Apple has not covered itself in glory in attempting to grasp this opportunity.

Google, with its Android ecosystem and custom AI chips, might also be a contender. Or is it Samsung? Xiaomi? BBK? Are they the ones with the real opportunity?

(All of this, of course, rests on the crucial assumption that these technologies remain our servants, rather than becoming our “brain-hacking masters” bent on maximizing engagement to the detriment of user well-being. The history of digital platforms offers ample cautionary tales—from Facebook’s news feed to TikTok’s infinite scroll—where algorithms have been optimized not for the user’s benefit, but for the platform’s profit. The risk with AI-powered interfaces is that they could become even more adept at capturing attention, and not for our benefit.)

One need only recall the dot-com crash to see how these dynamics can play out. In the late 1990s, technologist-entrepreneurs blithely assumed that “the business model will come”—that is, that profits would inevitably follow from user growth and technological innovation. This optimism crashed headlong into the reality that, without a sustainable way to capture value, even the most popular services could not survive. Microsoft and other incumbents, with their deep pockets and ability to bundle services, wielded the “no, it won’t” club against those who bet on the inevitable emergence of profitable business models. Today’s platform oligopolists—Google, Apple, Microsoft, Amazon, and their ilk—have vastly greater financial resources, making the stakes, and the potential fallout from an AI bubble burst, even larger.

In sum, the economic history of technology teaches us that user surplus often rises rapidly in the wake of innovation, but profits and stock-market values are another matter entirely. The winners will be those who can either operate at scale with minimal marginal costs, or who control the key chokepoints—be they hardware, operating systems, or proprietary data. For everyone else, the lesson is clear: plan for a long runway, and don’t count your profits before they have not just hatched but fully fledged.



Now Andrej Karpathy believes that OpenAI and company will not be able to grab everything—that there will be niches, and very profitable niches, for near-bespoke “context engineering”:

Andrej Karpathy: <https://twitter.com/karpathy/status/1937902205765607626>: ‘+1 for “context engineering” over “prompt engineering”.

People associate prompts with short task descriptions you’d give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step:

Science because doing this right involves task descriptions and explanations, few shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting… Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial.

And art because of the guiding intuition around LLM psychology of people spirits. On top of context engineering itself, an LLM app has to:

- break up problems just right into control flows
- pack the context windows just right
- dispatch calls to LLMs of the right kind and capability
- handle generation-verification UIUX flows
- a lot more: guardrails, security, evals, parallelism, prefetching…

So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term “ChatGPT wrapper” is tired and really, really wrong…

Tobi Lutke: <https://twitter.com/tobi/status/1935533422589399127>: ‘I really like the term “context engineering” over prompt engineering. It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM…

Perhaps. Perhaps not.
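To make the “context engineering” idea concrete, here is a minimal, hypothetical sketch of what such a layer does: pack a task description, few-shot examples, retrieved documents, and conversation history into one prompt under a token budget. All function and label names are my own invention, and the token heuristic is deliberately crude:

```python
# Minimal sketch of context assembly for a single LLM call.
# Hypothetical names and a crude token heuristic throughout.

def rough_token_count(text: str) -> int:
    # Very rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def build_context(task: str, examples: list[str], retrieved: list[str],
                  history: list[str], budget: int = 4000) -> str:
    """Pack task, few-shot examples, retrieved docs, and recent
    history into one prompt, skipping pieces that exceed the budget."""
    # Priority order: task > examples > retrieved docs > history
    # (history newest-first, so old turns are dropped first).
    sections = [("TASK", [task]),
                ("EXAMPLES", examples),
                ("RETRIEVED", retrieved),
                ("HISTORY", list(reversed(history)))]
    parts, used = [], 0
    for label, items in sections:
        for item in items:
            cost = rough_token_count(item)
            if used + cost > budget:
                break  # too little context hurts; too much costs money
            parts.append(f"[{label}] {item}")
            used += cost
    return "\n".join(parts)
```

The point of the sketch is Karpathy’s: the value is in the prioritization and budgeting logic around the model call, not in the call itself—which is why “ChatGPT wrapper” undersells what these apps do.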


And Andrej Karpathy also sees a way for Apple (and Google? or Samsung, Xiaomi, BBK?) to superclean up using their on-device processing edge:

Andrej Karpathy: <https://twitter.com/karpathy/status/1938626382248149433>: ‘The race for LLM “cognitive core” - a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystalizing:

- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time.
- Reasoning, also with a dial. (system 2)
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization and customization.
- Delegates and double checks just the right parts with the oracles in the cloud if internet is available.

It doesn’t know that William the Conqueror’s reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can’t recite the SHA-256 of empty string as e3b0c442…, but it can calculate it quickly….

Mak[ing] up in super low interaction latency (especially as multimodal matures), direct / private access to data and state, offline continuity, sovereignty (“not your weights not your brain”). i.e. many of the same reasons we like, use and buy personal computers instead of having thin clients access a cloud via remote desktop or so…

Again, perhaps and perhaps not.
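Karpathy’s SHA-256 example, at least, is easy to check—the hash of the empty string really does begin e3b0c442, and computing it is exactly the kind of lookup-free task a small on-device model would delegate to a tool:

```python
import hashlib

# SHA-256 of the empty string: a fact to compute, not to memorize.
digest = hashlib.sha256(b"").hexdigest()
print(digest[:8])  # e3b0c442
```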

For the student of economic history, this is a familiar story. Technological innovation expands the pie, but not every baker gets a bigger slice. The AI revolution will enrich our capabilities, but unless you own the oven—or the recipe—you may find yourself left with crumbs. The lesson: enjoy how much surplus your users are getting, but don’t expect easy or superlarge profits.





If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail…


#the-roi-problem-of-ai