Spreadsheet, Not Skynet: Microdoses, Not Microprocessors
The power of GPT LLM MAMLMs as linguistic artifacts is modest relative to the size of the economy. Of course, since the economy is super huge, a relatively modest effect on it is still a huge one. But this is more an Excel-class innovation than an existential threat to humanity. Or am I wrong?
I am pretty confident that “AI” will, in the short- and medium-run, have a minimal impact on measured GDP and a positive but limited impact on human welfare (provided we master our own attention-uses of ChatGPT and company, rather than finding that others who wish us ill are using them to enslave our attention). I just said so, earlier today: J. Bradford DeLong: MAMLM as a General Purpose Technology: The Ghost in the GDP Machine <https://braddelong.substack.com/p/mamlm-as-a-general-purpose-technology>:
But in my meatspace circles here in Berkeley and in the cyberspace circles I frequent, I find myself distinctly in the minority. Here, for example, we have a very interesting piece, but one I must regard as weird, from the smart and thoughtful Ben Thompson:
Ben Thompson: Checking In on AI and the Big Five <https://stratechery.com/2025/checking-in-on-ai-and-the-big-five/>: ‘There is a case to be made that Meta is simply wasting money on AI: the company doesn’t have a hyperscaler business, and benefits from AI all the same. Lots of ChatGPT-generated Studio Ghibli pictures, for example, were posted on Meta properties, to Meta’s benefit. The problem… is that the question of LLM-based AI’s ultimate capabilities is still subject to such fierce debate. Zuckerberg needs to hold out the promise of superIntelligence not only to attract talent, but because if such a goal is attainable then whoever can build it won’t want to share; if it turns out that LLM-based AIs are more along the lines of the microprocessor—essential empowering technology, but not a self-contained destroyer of worlds—then that would both be better for Meta’s business and also mean that they wouldn’t need to invest in building their own…
The sharp Ben Thompson has inhaled so much of the bong fumes that, for him, the idea that the impacts of LLM-based AIs will be “along the lines of the microprocessor” is the low-impact case. The high-impact case is that they will become superintelligent destroyers of worlds.
This seems to me crazy.
Most important, I have seen nothing from LLM-based models so far that would lead me to classify the portals they provide to structured and unstructured datastores as more impactful than the spreadsheet: a tool for natural-language interaction rather than for structured calculation. And I see nothing to indicate that they are or will become complex enough to be more than that. Back in the day, you fed Google keywords, and it told you what the typical internet s***poster writing about the keywords had said. Poke and tweak Google, and it might show you what an informed analyst had said. That is what ChatGPT and its cousins do, but using natural language, and thus meshing with all your mental affordances in a way that is overwhelmingly useful (because it makes interaction so easy and frictionless) and overwhelmingly dangerous (because it has been tuned to be so persuasive).
(Parenthetically, Thompson’s conclusion that in the low-impact case Facebook “wouldn’t need to invest in building their own” models is simply wrong. Modern advanced machine-learning models—MAMLMs—are much more than just GPT LLM ChatBots. The general category is very big-data, very high-dimension, very flexible-function classification, estimation, and prediction analysis. Perhaps 80% of the current anthology-intelligence hivemind share is about ChatBots, and only 20% about other stuff. But the other stuff is very important to FaceBook: very big-data, very high-dimension, very flexible-function classification, estimation, and prediction analysis is key for ad-targeting. And FaceBook needs to keep Llama competitive and give it away for free to cap the profits a small-numbers cartel of foundation-model providers could transfer from FaceBook’s pockets to their own.)
Now I see the GPT LLM category of MAMLMs as limited to roughly spreadsheet—far below microprocessor—levels of impact because I see them as what I was taught in third-grade math to call “function machines”: you give them something as input, they burble away, and then some output emerges. Start with the set of all word-sequences—well, token sequences—that can be said or written.
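The “function machine” framing can be made concrete with a toy sketch: a trivial next-token sampler over a hand-written bigram table. Everything here—the table, the function name, the vocabulary—is invented for illustration, not a description of any real model; the point is only the shape of the thing: token sequence in, burbling in the middle, token sequence out.

```python
import random

# A made-up bigram table: for each token, the tokens that may follow it.
# Purely illustrative; a real LLM learns billions of parameters instead.
BIGRAMS = {
    "the": ["cat", "dog", "economy"],
    "cat": ["sat", "ran"],
    "dog": ["barked"],
    "economy": ["grew", "slowed"],
}

def function_machine(tokens, n_new=3, seed=0):
    """A toy 'function machine': input token sequence in, extended sequence out."""
    rng = random.Random(seed)       # fixed seed so the burbling is reproducible
    out = list(tokens)
    for _ in range(n_new):
        choices = BIGRAMS.get(out[-1])
        if not choices:             # dead end: no known continuation
            break
        out.append(rng.choice(choices))
    return out

print(function_machine(["the"]))
```

However elaborate the innards get, the interface is the same as this toy’s: a mapping from token sequences to token sequences.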