Rashomon-AI: Fear, Hype, & Platform Power? Or Broad-Based Productivity Gains Close Enough to Smell?
Monday MAMLMs: Where Will the Money to Pay for All the “AI” DataCenters Come From? Modern Advanced Machine-Learning Models (MAMLMs) will have value as very big-data, very high-dimension, very flexible-function classification, prediction, and regression analyses. But will GPTs—Generative Pre-Trained Transformers—take over the modern world to the extent that even the internet did, or to the extent that a proper full-fledged GPT—General-Purpose Technology—typically does, let alone to the extent that talk of a singularity or The Singularity would suggest? Yes, MAMLMs are real tools, but current everyday value is still narrow and incremental, not transformational…
Chad Orzel’s experience here is much closer to mine.
MAMLMs may be of great use, but they will not upend my workflow and daily experience, let alone that of people who are not part of the tech-clerisy who make knowing about the latest computer-tech new new thing an avocation and part of their vocation.
Except, that is, for:
reminding me of picky points of Python syntax,
decoding Python error messages,
summarizing,
assisting as another and often very interesting set of eyes on an internet for which SEO has made nearly all Google searches massively unsatisfactory,
serving as a natural-language interface to trusted structured data stores,
and so on.
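To make the first item concrete: the kind of “picky point of Python syntax” I mean is something like the mutable-default-argument trap, where a chatbot’s reminder genuinely saves time. The sketch below is purely illustrative, and the function names are my own:

```python
# A classic "picky point of Python syntax": a mutable default argument
# is evaluated once, at function definition, and then shared by every
# subsequent call -- a perennial source of confusing bugs.

def append_bad(item, bucket=[]):      # one list, shared across all calls
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):   # the idiomatic fix: default to None
    if bucket is None:
        bucket = []                   # fresh list on each call
    bucket.append(item)
    return bucket

print(append_bad(1))    # [1]
print(append_bad(2))    # [1, 2]  <- surprise: the old 1 is still there
print(append_good(1))   # [1]
print(append_good(2))   # [2]     <- each call gets its own list
```

Trivial once you remember it; tedious to rediscover from a stack trace at midnight, which is exactly where the chatbot earns its keep.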
Chad Orzel:
Chad Orzel: How Useful Is the Big Bag of Words? <https://chadorzel.substack.com/p/how-useful-is-the-bag-of-words>: ‘Dipping into the roiling cauldron of linear algebra…. The theme of the 2025-26 academic year is clearly “Fretting About the Bag of Words”…. A five-question Google Forms survey… [producing] a Google Sheet with roughly 150 rows of content, not quite in the order of the talks…. This feels similar enough to the kinds of things I see people talking about doing with “AI” that it seemed worth a shot…. A complete hallucination. Wrong names, wrong number of columns, made-up comments. Last week, I tried it a second time and got more of the same hallucinated nonsense…. [But this] is more like it…. It did converge moderately quickly to the thing…. I wouldn’t call this a revolutionary development, by any means…. I’m not wild about the errors along the way…. On the other hand, if I were significantly less comfortable dicking around with spreadsheets, I’d probably find it much more impressive…. So, that’s my initial experiment with the roiling cauldron of linear algebra being sold as AI these days…. If I start to end up with more tasks in this vein, I would consider giving it another shot. And that’s the kind of squishy lukewarm reaction that is my signature as a blogger…
And:
Chad Orzel: My Sisyphean Relationship with AI <https://chadorzel.substack.com/p/my-sisyphean-relationship-with-ai>: ‘Administrative work… is… where “AI” systems would come closest…. Extracting a bunch of numbers from poorly-formatted data files… is the kind of numbing task that would benefit from some form of automation. The problem is, though, that the processes for which I have to do this kind of thing are both infrequent and relatively high-stakes: budgeting, staffing, reappointment and promotion reviews, etc. That means it matters that the numbers are right, which means I’m going to have to check them, and at our scale of operation, checking someone else’s answers isn’t all that much faster than generating the answers myself. So, again, it’s not a significant efficiency boost. And, of course, for tasks that need to be repeated, the act of going through it myself involves me learning how to do that thing, which makes the next iteration easier…. I keep finding myself in this state where I am at least in principle willing to give “AI” systems a try, but I can’t come up with a use case where I think they would be actually helpful…. So, back to the bottom of the hill I go…
But there is Ben Thompson:
Ben Thompson: Agents Over Bubbles <https://stratechery.com/2026/agents-over-bubbles/>: ‘The most compelling consumer applications… are Google and Meta’s advertising…. It was always unrealistic for OpenAI to think… consumers into subscribers…. Most people don’t want to pay for AI; it remains to be seen if they want to use it enough to make the ad model work…. [But] the enterprise market: companies have a demonstrated willingness to pay for software that makes their employees more productive…. I’m sympathetic to the argument that [in] the best companies… AI will… [be] replacing hard-to-manage-and-motivate human cogs in the organizational machine with agents that not only do what they are told but do so tirelessly and continuously until the job is done…. The weaknesses of LLMs are being addressed by exponential increases in compute…. The number of people who need to wield AI effectively for demand to skyrocket is decreasing…. The economic returns from using agents aren’t just impactful on the bottom line, but the top line as well…. Is it any wonder that every single hyperscaler says that demand for compute exceeds supply, and… is… announcing capex plans that blow away expectations?…
Ben Thompson, I think, largely agrees—except he is tending toward seeing a future world in which an AI-enabled tech-clerisy does effectively all the useful cognition-requiring work with its “agents”, while the rest of them—or is it the rest of us?—scramble for janitor and home-health jobs.
Begin with the warning that I can offer no warranty for my beliefs here. All I can give is my best current reading of a very uncertain future reality, where smart people I usually trust radically disagree and see radically different things going on. But this is not a failure of seriousness. Rather, it is a reflection of how complicated the world actually is. This is a very fallible, revisable contribution to an ongoing conversation, not a tablet brought down from the mountaintop. And that leads to a meta-conclusion: in this line, right now, anyone offering you guarantees is selling snake oil.
Perhaps I am more cautious than most. I remember, after all, that I thought it highly likely Uber would be a bust—an investor‑overexuberance, a driver amortization‑misperception, and a regulatory triple play that, when push came to shove, would not pay its bills on its own. I was wrong. Uber did not crash and burn on the timetable I expected; enough capital was willing to subsidize below‑cost rides for long enough, and enough regulators were willing to look the other way for long enough, that the company successfully entrenched itself in urban transport systems around the globe. The equilibrium that emerged was not the clean textbook reversion to sanity I had anticipated, but something messier. That experience reminds me that my internal model can be badly calibrated indeed, and that technological change plus very patient capital can sometimes hold together arrangements that look, on my reading at least of first economic principles, unsustainable.
So I need to think carefully here. Am I once again underestimating the willingness of investors to fund a long march through losses? Or, conversely, am I at risk of learning the wrong lesson—taking one noisy data point, Uber, and universalizing it into a belief that any sufficiently well‑branded and well‑funded “platform” can defy gravity long enough to create new realities?
The first thing I grab onto is that, right now, everyone with a platform monopoly (except Apple) is working diligently and spending whatever is needed to eliminate OpenAI’s ability to exist anywhere near its consumer space. The cloud oligopolists have now sunk hundreds of billions of dollars into AI infrastructure. The economics of those large sunk costs all point in the same direction. They do not believe they can afford to risk letting any model provider sit between them and the user and harvest the application-layer rents.
Microsoft has already moved to treat OpenAI not just as a partner but as a direct competitor in AI and search. It formally listed it alongside Google and Apple in its 2024–25 competitive filings, precisely as OpenAI experiments with things like SearchGPT and other consumer-facing fronts that overlap with Copilot (CNBC). Google, for its part, is quite explicit that Gemini is not just a model but a stack of products meant to be woven into Android, Chrome, and every corner of the Google consumer empire. The rest of the pack is behaving similarly. Meta is pushing hard to make “Meta AI” the default assistant across Instagram, WhatsApp, and Facebook, boasting of hundreds of millions of monthly users and pitching itself as the future “most used AI assistant” rather than as a neutral model supplier that politely sits behind other people’s branded front-ends (Meta). Amazon wants Alexa plus its own models to be the front door to online commerce.
Even Anthropic, which does not own an operating system or a huge consumer-facing platform, has made clear through its terms of service and moves up the stack that it would prefer its own application-layer rents rather than simply wholesaling intelligence to others.
Apple is the outlier not because it is friendly to OpenAI as such, but because it is playing a different game—hoping to fuse the model with the device and the local operating system, with cloud models treated as swappable back-end components rather than sovereign consumer brands.
“OpenAI as a widely loved cross‑platform consumer app” is not an equilibrium its nominal partners will long tolerate. They may internalize it. They may box it into the enterprise and API back‑end niche. They will do their very best to starve it of distribution. The history of Netscape-meets-Microsoft rhymes, but this time with unbelievable-scale datacenter investments added on.
That configuration of competitive reaction by the platform monopolists is itself creating a huge AI‑deployment and AI datacenter‑construction boom. Microsoft, Google, Amazon, and Meta all reached for the same lever: outspend everyone else on compute, networking fabric, and power‑hungry GPU farms, and then pull those capabilities deep inside their own clouds and consumer products. “Big Tech” may be on track to devote north of $500 billion in 2027 alone to AI‑related capex (Bloomberg). On the deployment side, this shows up not in graceful Schumpeterian competition among many small innovators but in a handful of firms turning entire regions into GPU‑powered company towns: clusters in Northern Virginia, Texas, and California drawing power on the scale of heavy industry, with AI‑driven data centers alone consuming more than 4% of national electricity in 2024 and on track to exceed the demand of many traditional manufacturing sectors by the end of the decade (Pew). This new GPT—this time “General‑Purpose Technology”—is not emerging organically but is being bootstrapped into existence by a concentrated investment wave driven by fear of being the one big player left without a chair when the music stops. That is the source of the huge AI-deployment and AI datacenter-construction boom.
And those booms are, in turn, causing a great many people to decide that now is a time for them to join the rush prospecting for this round of digital gold. They are reading the signal as “the hyperscalers think here is where the money is to be made”, rather than “we need to defend ourselves against Christensenian disruption”. The pattern is not unfamiliar: a real underlying technological opportunity, overlaid by narrative‑driven exuberance and a great deal of noise about who will own the future. What is distinctive this time is how tightly the goldfield is fenced. The upside that attracts the prospectors is real enough—productivity gains from better search, code generation, and workflow automation; new consumer applications; hopes of “AI copilots” everywhere. But most of the digital shovels and picks are being sold by, and most of the richest claims staked in advance by, the same handful of hyperscale platforms whose AI capex and model development are driving the boom in the first place. It is not rational for all to flood into AI startups, consulting practices, and speculative “AI‑enabled” business plans given that no more than a small fraction of them will ever earn back the opportunity costs of their time and capital once the dust settles.
What, really, after all, are people doing with their tokens that promises enough ultimate end-user value to actually pay the fully amortized datacenter carrying, depreciation, and power costs? It is a relatively modest picture:
chat interfaces that write emails and slide decks faster,
copilots that help programmers refactor and remember syntax,
marketing departments spinning out more A/B‑tested ad copy,
plus a long tail of experimental use cases whose productivity payoff is, as yet, highly uncertain.
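To put rough numbers on that question—every figure below is an assumption of mine, chosen only to make the arithmetic visible, not a reported datum—here is a back-of-the-envelope sketch of what it would take for consumer subscriptions alone to cover the datacenter bill:

```python
# Back-of-the-envelope: what annual revenue would recover AI datacenter
# costs? EVERY number here is an illustrative assumption, not a reported
# figure; the point is the shape of the arithmetic, not the precision.

capex = 500e9            # assumed one year of industry AI capex, dollars
depreciation_years = 5   # assumed useful life of GPUs and buildings
opex_share = 0.25        # assumed annual power + operations, as share of capex

annual_depreciation = capex / depreciation_years    # $100B per year
annual_opex = capex * opex_share                    # $125B per year
annual_cost = annual_depreciation + annual_opex     # $225B per year

subscription = 20 * 12   # assumed $20/month consumer plan, dollars/year
subscribers_needed = annual_cost / subscription

print(f"Annual cost to recover: ${annual_cost / 1e9:.0f}B")
print(f"Subscribers needed at $20/mo: {subscribers_needed / 1e6:.0f} million")
```

Under these assumed numbers, something close to a billion paying consumer subscribers would be needed to cover the bill from subscriptions alone—which is why enterprise sales and advertising loom so large in every optimistic scenario.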
The optimistic story, much beloved by consultants and investor decks, is that GPT’s tokens (this time “Generative Pre-Trained Transformer”) are the front end of a GPT (this time “General-Purpose Technology”) that will REAL SOON NOW raise total factor productivity by measurable percentage points—if not by much more! Generative AI could add trillions of dollars a year to global GDP once it is fully diffused through customer operations, software engineering, and back‑office workflows (McKinsey)! If even a fraction of that prospective surplus were to actually materialize and could be taxed or captured as profit, then today’s vast datacenter build‑out might, with hindsight, look wise.
The more cautious reading is that an innovation with real, but initially narrow, productive uses is wrapped in a utopian narrative, leveraged into an investment wave far ahead of demonstrated cash flows, and only much later do we discover how many of those tokens were buying genuine increments of human welfare and how many were merely postponing the reckoning on sunk costs.
Hence right now we are still in “something will turn up” mode; hence right now “WE ARE BUILDING DIGITAL GOD!!!!” is still playing an enormous role here as an energizer, for hard numbers do not yet justify the fervor.
And it is in that gap between present costs and hoped‑for benefits that the theology creeps in.
It is not an accident that industry leaders and their cheerleaders keep reaching for religious metaphors—“omnipotent superintelligence,” “creating god,” “second coming via silicon”—or that cultural critics now routinely note how we talk about AI with the language once reserved for deities and oracles (New York Times; Deus in Machina). That rhetoric does important economic work. It reassures investors that any current mismatch between returns and expenditures is temporary, because we stand on the cusp of an epochal transformation. That rhetoric encourages engineers, regulators, and the broader public to suspend normal skepticism, in the name of participating in a quasi‑sacred project. “Something will turn up,” in this telling—but not because the spreadsheets add up. “Something will turn up” because one does not question Providence when a new DIGITAL GOD—for good or evil, Milton’s Jehovah or Milton’s Satan—is under construction.
However, Chad Orzel is the kind of person who ought to be an early adopter of useful MAMLMs that transform the daily workflow of an expert knowledge worker. And he is not finding that so.
This is, I think, a nontrivial fact. Orzel is a professional physicist, teacher, and explainer. Orzel is entirely comfortable with linear algebra, probability, and code. He is also definitely too online. He also has a low tolerance for bullshit (Quantum Is Not the Answer to AI).
If the tools we are currently hyping as “copilots for knowledge work” were already general‑purpose productivity enhancers, people exactly like him would by now have reorganized their workflows around them.
He has not.
That ought to make us very cautious about narratives in which the professional classes are already being transformed en masse by machine assistants.
The broader empirical backdrop points in the same direction. Surveys of students and academics regularly find very high rates of experimentation with generative AI—on the order of four‑fifths of respondents saying they have tried ChatGPT or its cousins—but the dominant use cases remain brainstorming, light editing, and summarization. The technology is present, often impressive, and heavily sampled; what it has not yet done—at least for people like Orzel—is cross the line from “occasionally handy adjunct” to “obviously indispensable infrastructure,” the way word processors and email did a generation ago.
The real questions are these: Who will a software ‘bot copilot be truly useful for? For whom will it be possible to run a department by orchestrating ‘bot agents rather than orchestrating a human team? At the level of running a department, the consulting literature is already fantasizing about the “superagency” manager who uses a stable of semi‑autonomous software agents to monitor projects, summarize status, draft communications, and even schedule and sequence work across a portfolio of tasks (McKinsey “Superagency” report). But that vision presupposes an environment where outputs are largely digital, interfaces are standardized, and performance can be measured in terms that bots can track: think of a product‑management group in a software firm, not a social‑work unit or a university department.
Thus the early evidence suggests a very uneven distribution.
The complementarity story looks very familiar: the technology augments those who already sit near the top of the organizational and skills hierarchy, and does much less for those whose work is either tightly scripted or requires rich, in‑person, tacit coordination. So far, relatively little benefit is visible for routine service jobs that are most exposed to automation narratives (OECD AI and skills).
Historically, when we have given managers new information technologies—railway telegraphs in the 19th century, MRP systems in the 20th—the immediate effect has been to increase the span of control and the centralization of decision‑making where quantification is easy, while leaving messy, qualitative domains to human discretion. There is every reason to expect this round will rhyme: software copilots will be truly useful for the already‑empowered orchestrators of codifiable work, and much less so for those whose job is to manage humans in all their unquantified variety and anxiety.
Thus to summarize: MAMLMs, especially GPTs (“Generative Pre-Trained Transformers”), are genuinely useful as flexible big‑data tools, but so far they look more like modest workflow aids for a tech‑savvy clerisy than a GPT (“General-Purpose Technology”) on the scale of electrification. The hyperscalers’ competitive scramble to prevent OpenAI‑style independents from owning new consumer interfaces is powering a massive, capital‑intensive data‑center build‑out. But that is more like a defensive arms race than a harbinger of a rational expectation of massive future end‑user value. Thus much of the investor and corporate enthusiasm is being sustained by quasi‑religious “digital god” rhetoric and optimistic consultant projections. My bet is that AI “agents” will mostly amplify already‑powerful managers in highly codified, digital environments, rather than upend work for the broad mass of workers, or make bot‑run departments a near-universal reality.
This matters because trillions of dollars of capital spending, a reshaping of power and employment in the digital economy, and a growing share of global electricity demand are being justified by the “not a bubble” story. And that is diverting societal energy away from mundane but proven drivers of shared prosperity, and toward overbuilding infrastructure, entrenching platform monopolies, and setting ourselves up for another 1859 or 1873 or 1999 or 2008.
AI-Constructed Reference List:
Bernstein, Joseph. 2026. “It Makes Sense That People See A.I. as God.” The New York Times, January 23. <https://www.nytimes.com/2026/01/23/style/ai-algorithm-god-religion.html>.
Day, Matt, and Annie Bang. 2026. “How Much Is Big Tech Spending on AI Computing? A Staggering $650 Billion in 2026.” Bloomberg, February 6. <https://www.bloomberg.com/news/articles/2026-02-06/how-much-is-big-tech-spending-on-ai-computing-a-staggering-650-billion-in-2026>.
International Energy Agency (IEA). 2025. Energy & AI. Paris: IEA. Especially “Executive Summary” and “Energy Demand from AI.” <https://www.iea.org/reports/energy-and-ai>.
Leppert, Rebecca. 2025. “What We Know About Energy Use at U.S. Data Centers amid the Artificial Intelligence Boom.” Pew Research Center, October 24. <https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/>.
MacCarthy, Mark. 2026. “What Happens When AI Companies Compete with Their Customers?” Brookings Institution, March 12. <https://www.brookings.edu/articles/what-happens-when-ai-companies-compete-with-their-customers/>.
Meta Platforms, Inc. 2024. “The Future of AI: Built with Llama.” Meta AI Blog, December 19. <https://ai.meta.com/blog/future-of-ai-built-with-llama/>.
Microsoft Corporation. 2024. “Microsoft Says OpenAI Is Now a Competitor in AI and Search.” Reported by Jordan Novet, CNBC, July 31. <https://www.cnbc.com/2024/07/31/microsoft-says-openai-is-now-a-competitor-in-ai-and-search.html>.
OECD. 2024. Artificial Intelligence & the Changing Demand for Skills in the Labour Market. Paris: OECD. <https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/04/artificial-intelligence-and-the-changing-demand-for-skills-in-the-labour-market_861a23ea/88684e36-en.pdf>.
Orzel, Chad. 2025. “How Useful Is the Big Bag of Words?” Counting Atoms, October 24. <https://chadorzel.substack.com/p/the-problem-of-the-bag-of-words-is>.
Orzel, Chad. 2026. “My Sisyphean Relationship with ‘AI’.” Counting Atoms, February 19. <https://chadorzel.substack.com/p/my-sisyphean-relationship-with-ai>.
Manyika, James, Michael Chui, et al. 2023. The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey Global Institute, June. <https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier>.
De Smet, Aaron, Laura LaBerge, et al. 2025. Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential. McKinsey & Company, January. <https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work>.
Thompson, Ben. 2026. “Agents Over Bubbles.” Stratechery. March 16. <https://stratechery.com/2026/agents-over-bubbles/>.
Wilson-Bates, Tobias. 2024. “Deus in Machina: AI & Divine Rhetoric.” North American Conference on British Studies Blog, February 26. <https://www.nacbs.org/post/deus-in-machina-ai-and-divine-rhetoric>.
World Economic Outlook Team (McKinsey/QuantumBlack). 2024. “The State of AI in Early 2024: Gen AI Adoption, Impact, & the Road Ahead.” McKinsey & Company, April. <https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-2024>.
& how did it do?
