Notes on the Berkeley "The AI Con" Book Launch Event
2025-10-02 16:00 PDT (Th): 470 Stephens Hall: Massimo Mazzotti, Morgan Ames, Alex Hanna, Khari Johnson, Tamara Kneese, & Timnit Gebru. The AI Con Book Roundtable…
So I went to see:
Mazzotti, Massimo, Morgan Ames, Alex Hanna, Khari Johnson, Tamara Kneese, & Timnit Gebru. 2025. “The AI Con Book Roundtable.” Center for Science, Technology, Medicine & Society (CSTMS); University of California, Berkeley, October 2, 4:00–5:30 pm, 470 Stephens Hall. <https://cstms.berkeley.edu/events/the-ai-con-book-roundtable/>.
Lots of interesting and true things were said. And I love the podcast by the book’s authors Emily Bender and Alex Hanna, “Mystery AI Hype Theater 3000” <https://www.dair-institute.org/maiht3k/>. And I have the book, and am going to read it straight through REAL SOON NOW. And we all owe a great cognitive debt to Emily Bender & al. (2021) <https://doi.org/10.1145/3442188.3445922> for coining the viral meme “stochastic parrots” as a metaphorical description of GPT LLM MAMLMs—General-Purpose Transformer Large-Language Modern Advanced Machine-Learning Models.
But I came away very frustrated.
Why? Because there was a huge hole at the center of the panel discussion.
The hole? The hole was the near-complete absence of a description or a view of what “AI” is. There was great agreement on what it was not:
not a set of Turing-Class software entities,
not a research program that would lead to the construction of Turing-Class software entities,
and especially not something that would lead in short order to the creation of the DIGITAL GOD that is “ASI”—Artificial Super-Intelligence.
But very little on what it is. There was, if I recall correctly, one reference to “stochastic parrots”. There were two to “synthetic text extrusion machines”. There was one call I wholeheartedly endorse—there is a reason I write “MAMLM” rather than the wishful mnemonic “AI”—to, roughly (this isn’t a quote): break the spell, break the myth… [by] replac[ing] cases where they say ‘AI’ with ‘matrix manipulations’ instead, and how does that change how you think about it…
But that was pretty much it.
That, to me, does not cut it.
Yes, the AI hype boom is ludicrous and delusional. Yes, the castles in the air are insubstantial things made up of clouds in accidental arrangement.
But, still, there are signs and wonders on this earth.
There are real technologies being developed and deployed. There are things happening that have triggered and reinforced the very strange beliefs we see around us, beliefs that end with:
a remarkably large number of people whose judgment in daily life navigating our society seems quite good, but who now fervently believe that they are going to build DIGITAL GOD within the decade; and
a smaller but still remarkably large number of people who have societal power to commit hundreds of billions of dollars of real resources to enterprises, and have decided to commit them to the AI hype bubble build-out.
Thus it seems to me that there is one most important task for an AI-skeptic today. That is to explain what “AI” actually is, since we, or at least my karass and I, think we know what it is not, and so help us see better why all this is happening. What are those signs and wonders on this earth, really? What are the possible true ways of characterizing the technologies being developed and deployed? To lament that people are falling for the hype, and are wrong to do so, is not, to my mind, sufficiently enlightening.
So I tried to provoke. I asked a question. This, below, is not the question I actually asked, but rather an expanded version, the one I wish I had said:
Perhaps the most useful thing I can do now is to attempt to channel my friend Cosma Shalizi of Carnegie Mellon. He is one of the crankiest advocates I know of the “Gopnikist” line that these are distributed socio-cultural info technologies, and that the entire hype cycle is indeed extremely delusional. He views someone like an Eliezer Yudkowsky saying “shut this down now or it will kill us all!” as the equivalent of someone raving about how library card catalogues must be placed inside steel cages RIGHT NOW lest they go feral. And he is equally dismissive of the other side of the doomer/boomer coin—that pairing was one of the great things about the book The AI Con, by the way. Consider Travis Kalanick boasting that his conversations with Claude have led him to the point of being about to make fundamental discoveries in physics.
Nevertheless, it is astonishing that this roiling boil of linear algebra can come as close as it can to genuinely passing the Turing Test. I mean, “It’s Just Kernel Smoothing” and “It’s Just a Markov Model” vs. “You Can Do That with Just Kernel Smoothing and a Markov Model!?!” The results are very impressive! And they were very unexpected!
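For concreteness, here is my own sketch of what the “It’s Just Kernel Smoothing” half of that joke points at; the notation is mine, not anything said on the panel. A single attention head in a transformer computes a weighted average of value vectors, and that weighted average has exactly the form of a Nadaraya–Watson kernel-smoothing estimate, with an exponential kernel over query–key dot products:

```latex
% One attention head, for a query q over keys k_i and values v_i (key dimension d):
\[
\operatorname{Attn}(q) \;=\; \sum_i w_i(q)\, v_i ,
\qquad
w_i(q) \;=\; \frac{K(q,k_i)}{\sum_j K(q,k_j)} ,
\qquad
K(q,k) \;=\; \exp\!\bigl(q\cdot k/\sqrt{d}\bigr).
\]
% Nadaraya–Watson kernel smoothing: a kernel-weighted average of the v_i,
% with weights set by how "close" the query q is to each key k_i.
```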
Kernel smoothing in general is really old in ML dog-years. Markov models for language are really old in ML dog-years.
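To make “really old” concrete, here is a toy bigram Markov model of language, the kind of thing that has been a homework exercise for decades. Everything in it (the function names, the toy corpus) is mine, for illustration only, and it is of course a cartoon next to anything with a trillion parameters:

```python
# A bigram Markov model: predict the next word using only the current word.
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict[str, list[str]]:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    nexts: dict[str, list[str]] = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        nexts[current_word].append(next_word)
    return dict(nexts)

def generate(model: dict[str, list[str]], start: str, length: int = 10) -> str:
    """Walk the chain: sample each next word given only the word before it."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # dead end: this word was never seen with a successor
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the rug"
```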
But this!
What is it? It’s sorta like Google PageRank, producing the next word a typical internet poster would say in response to a prompt rather than the link a typical internet poster would create in response to the keywords. It is definitely not a software entity carrying out human-level, human-like thought. It is definitely not an embryonic DIGITAL GOD that just needs four more Moore’s Law-like doubling-cycles before it is too smart for us, and gets us all to do its bidding. It is not something that is going to get us all to do its bidding by flattering us. It is not something that is going to get us all to do its bidding by threatening us. It is not something that is going to get us all to do its bidding by hypnotizing us via its Waifu Ani avatar.
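If it helps, here is a deliberately cartoonish sketch, entirely mine and nobody’s actual model, of what “producing the next word” means mechanically: a pile of matrix manipulations that ends in a probability distribution over a vocabulary, from which one token is sampled. Real systems stack many attention and feed-forward layers where this toy just averages embeddings.

```python
# A cartoon of next-word production: matrices in, probability distribution out, sample.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]

# Stand-ins for learned parameters: an embedding matrix and an unembedding matrix.
d_model = 8
embed = rng.normal(size=(len(vocab), d_model))
unembed = rng.normal(size=(d_model, len(vocab)))

def next_word(prompt_ids: list[int]) -> str:
    """Average the prompt's embeddings (a crude stand-in for the attention layers),
    project to vocabulary scores, softmax into probabilities, and sample one word."""
    hidden = embed[prompt_ids].mean(axis=0)        # matrix row lookups + average
    logits = hidden @ unembed                      # matrix manipulation -> scores
    probs = np.exp(logits - logits.max())          # numerically stable softmax...
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]  # ...then sample one word

print(next_word([0, 1]))  # continue "the cat ..."
```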
But it is definitely a something!
So what is it?
I need to know what it is before I can start to make sense of the hype cycle and the hype cycle’s remarkable persistence. I need an explanation of why someone like Mark Zuckerberg has suddenly switched and become one of the chief boomer hypesters. Zuckerberg had been pursuing a more cautious, open-source road. He had been having his organization build a natural-language interface to Facebook and Instagram that was good enough. He had been open-sourcing the core of Llama. Why? So that others would be unable to greatly monetize their own foundation models to fund the construction of competing social networks that might destroy his platform-monopoly profit flow. But then, in the space of two months, he became the boomerest of boomer hypesters, the one who is going to spend more money than anyone else on ASI, on DIGITAL GOD.
What is it that he and others are seeing in these GPT LLM MAMLMs that leads them to these courses of action?
And the responses I got did not seem to me to hit the nail on the head.