From Aiskhylos to AI-Slop: Navigating the Cognitive Earthquake of Modern Advanced Machine-Learning Models

Modern Advanced Machine-Learning Models—MAMLMs—are already an off-scale financial-economic tech-sector earthquake, and they will become a classification-analysis earthquake as well, as they make possible ways of understanding the blooming, buzzing confusion of the world that were previously far beyond our grasp. For now, however, I think it is more fruitful to focus on how the ability to construct natural-language interfaces to structured and unstructured databases is changing, as Vannevar Bush put it nearly a century ago, the ways we may think…

GPT LLMs—General-Purpose Transformer Large Language Models—are, in and of themselves, natural-language interfaces to structured and unstructured databases.

This is a wonderful information-age cognition superpower, given all of our human, all too human, mental affordances.

GPT LLMs, starting with the surprise consequences of OpenAI’s release of ChatGPT, which was supposed to be a mere technology demonstration, have been an off-scale earthquake in the econo-financial space of the tech sector and beyond—and not because anyone sees any easy way to make really big money from it. Rather, people benefiting from current money flows fear their adverse rearrangement.

MAMLMs—Modern Advanced Machine-Learning Models—also are powerful tools for huge-data enormous-dimension highly-flexible classification analyses.
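What a “huge-data enormous-dimension highly-flexible classification analysis” amounts to is easier to see in miniature. The sketch below is a toy logistic-regression classifier in pure Python, run on synthetic data invented for the purpose—nothing in it comes from any real MAMLM, which fits millions or billions of parameters over vast corpora. It shows only the basic shape of the task: mapping many-dimensional feature vectors onto category labels.

```python
# Illustrative toy only: real MAMLMs fit millions of parameters over huge
# corpora. This pure-Python logistic-regression sketch on synthetic data
# just shows the shape of a "classification analysis": mapping
# many-dimensional feature vectors onto category labels.
import math
import random

random.seed(0)
DIM = 20  # toy stand-in for an "enormous-dimension" feature space

# Synthetic data: the two classes sit on opposite sides of a hidden direction.
hidden = [random.gauss(0.0, 1.0) for _ in range(DIM)]

def sample(label):
    sign = 1.0 if label == 1 else -1.0
    return [random.gauss(0.0, 1.0) + sign * h for h in hidden], label

data = [sample(random.randint(0, 1)) for _ in range(400)]

# Fit weights by plain gradient descent on the logistic loss.
w = [0.0] * DIM
lr = 0.5
for _ in range(50):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x))
        # Clamp z before exponentiating to avoid overflow.
        p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
        for i in range(DIM):
            w[i] += lr * (y - p) * x[i] / len(data)

# Training-set accuracy: a toy measure, for illustration only.
accuracy = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) > 0.0) == (y == 1)
    for x, y in data
) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the flexibility: nothing in the fitting loop knows what the twenty dimensions mean, and the same machinery scales—with vastly more compute and cleverer architectures—to the classification problems MAMLMs actually tackle.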

Thus “AI”—“Artificial Intelligence”—in its current meaning this year is also a source of great confusion. People meld the three—the natural-language interface, the financial-economic earthquake, and the classification analysis—and claim that AGI or ASI—Artificial General Intelligence or Artificial Super-Intelligence—is coming any day now. I think we should pull these apart, and consider them separately. For we do not yet have even a glimmer of the size, scale, and consequences of the second or the third, while I see glimmers of potential understanding on our part of the consequences of the first.

And on this first, we have this, from Henry Farrell:

Henry Farrell: Large language models are cultural technologies. What might that mean? <https://www.programmablemutter.com/p/large-language-models-are-cultural>: ‘Push… Singularity thinking to one side… changes to human culture and society are at the center…. Specific ways of thinking… about the… consequences… [of] LLMs….

Gopnikism…. The collective system of knowledge through which human beings gather and transmit information…. Gopnikism considers LLMs… [as] they “codify, summarize, and organize… information in ways that enable and facilitate transmission.” Equally, they may distort…. Manipulating cultural templates and conventions at scale can provide a lot of value added…. Individual human[s with]… very limited problem solving capacities… outsource a lot of the action to bigger social systems.…

Interactionism…. Cultural constructs… accord with… society… [or our] mental modules…. Gods, demons, dryads, wood-woses… arise and persist… because they click with… cognitive modules…. How will humans interpret the outputs of a technology that trips our cognitive sensors for agentic intelligence without having any?… What is the cultural environment going to look[?]… How are human beings… likely to interpret and respond[?]… And what kinds of feedback loops are we likely to see[?]…

Structuralism…. “Could we also discover genuinely new cultural forms and new ways of thinking?… Externalizing language, and fixing it in written marks, eventually allowed us to construct new genres (the scientific paper, the novel, the index) that required more sustained attention or more mobility of reference than the spoken word could support.… Examine… LLMs… emphasiz[ing] the collective structures from which meaning emerges.

Roleplay…. Murray Shanahan, Kyle McDonell and Laria Reynolds…. The LLM has a massive possible set of characters, situations or approaches…. will… narrow in… as dialogue proceeds…. Many of these… are cliches…. LLMs… have a hard time keeping on the right narrative tracks…. Much of the fine tuning and reinforcement learning and many of the hidden prompt instructions are designed to lay better tracks…


And so here, I think, we are making some progress:

The fourth of these—roleplay—is about adding another modality, talking to an LLM, to our current communication-information panoply. That panoply consists of things like: listening to a lecture, reading a book, watching a play, doing a problem set, and so on. In practical terms, engagement with large language models becomes yet another channel through which individuals and groups can interact with, learn from, and even co-create knowledge. And the question is: How will this matter?

For centuries, intellectual development has been shaped by the modalities of oral debate (think Socratic dialogues), textual analysis (the slow digestion of Plato, Aristotle, or Marx), and performative representation (the staged tragedies of Aiskhylos or Shakespeare). Is the addition of LLMs to this mix a merely incremental change, or a qualitative shift?

In the future, one addition to the ways we may think will be this: we can interrogate a simulated interlocutor—an externalized, algorithmic “mind”—drawing upon the entire corpus of digitized human knowledge, responding in real time, and adopting whatever rhetorical stance we choose. It is key to try to grasp how these digital roleplays might reshape our cognitive habits and our educational institutions. They may even transform our collective sense of what it means to “know” something in the twenty-first century. This capacity to spin up an external SubTuring Simulacrum of another mind in a particular situation may lead to a world in which it becomes routine to interact with sophisticated LLMs emulating personalities and reasoning processes. If kept “on track”, they offer opportunities for education, empathy, and intellectual exploration. Yet it is not going to be easy to maintain the fidelity and utility of these simulations, to avoid narrative derailment or the mere amplification of cliché, and to ensure that the benefits—insight, creativity, even wisdom—outweigh the risks of confusion, distortion, or epistemic drift. Renegotiating the boundaries of authorship, authority, and understanding in the age of the SubTuring Simulacrum may simply end with all of us drowning in the AI-slop.

Closely related is the third of these—what Farrell calls “structuralism”. With it we invoke once again Walter Ong on the transition from “orality” to “literacy”. Walter Ong was important for the age of mass audiovisual communication even before the coming of modern information technology. He became more important with the coming of the internet information-communications revolution. He is becoming even more important now.

No, I do not mean “important” in the sense that he gives us the opportunity to engage in hand-wringing about the state of social media, and the fear that TikTok will somehow return us to a pre-literate, tribal state.

Ong’s Orality and Literacy (1982) charted how the written word enabled new forms of abstraction, argumentation, and cultural memory, transforming the very structure of human thought. Now we have already seen an explosion of hybrid modalities that we have been struggling to digest and understand—miniseries that serialize narrative complexity, videogames that offer interactive world-building, weblogs that democratize intellectual discourse, and platforms like YouTube and TikTok that blend performance, commentary, and algorithmic curation. Each of these genres has reconfigured the boundaries between creator and consumer, text and context, audience and performance.

The arrival of LLMs promises yet another structural transformation: a medium in which textuality, orality, and interactivity are fused, and meaning emerges from the collective, iterative interplay between human and machine. What new genres, cognitive habits, and social institutions will crystallize from this ferment? Will LLMs nurture forms of sustained inquiry and creative synthesis, or will they accelerate the fragmentation and ephemerality that mark our current media landscape? We urgently need answers that, for now, remain out of reach.

The second of these—interactionism—zeroes in on a perennial quirk of human cognition: our tendency to anthropomorphize, to see agency and intention in the inanimate, the algorithmic, and the abstract.

This cognitive shortcut is ancient: lightning becomes Thor’s hammer Mjöllnir striking from the heavens, while the thunder is the rumble of the wheels of his goat-cart drawn by Tanngrisnir and Tanngnjóstr. In psychological terms, this is the activation of our “theory of mind” modules in contexts where no actual mind exists—a phenomenon that has shaped myth, religion, and now, our relationship with technology.

The arrival of LLMs intensifies this spiral.

Users routinely ascribe personalities, motives, even moral qualities to outputs generated by statistical pattern-matching. The risk is not trivial. Consider the proliferation of “AI companions” that foster emotional attachment, or the viral spread of misinformation when algorithmic outputs are mistaken for authoritative human judgment. Yet this flaw also contains the seeds of opportunity. If we can learn to recognize and modulate our impulse to anthropomorphize, perhaps we can harness it—using LLMs as cognitive prosthetics, as tools for creative ideation, or as platforms for exploring ethical dilemmas in a safely simulated environment. The challenge, I think, is to cultivate a cultural literacy that enables us to distinguish between genuine agency and its digital simulacra, thereby turning a potential source of confusion into a lever for collective benefit.

And the first of these is Henry’s favorite: Gopnikism. It focuses on how GPT LLMs are going to enable us to deal with the Sisyphean challenge of information overload in an era where the sheer volume of “good work” is not only overwhelming but often submerged beneath mountains of algorithmically generated dross. The proliferation of digital content, much of it low-value “AI-slop,” means that traditional methods of discovery and synthesis—reading, annotating, curating—are no longer sufficient. What is required, and what LLMs may offer, are new “information-access superpowers”: tools for summarization that distill the essential from the trivial, representation systems that map the structure of knowledge, and tree-structure generation that allows for the intuitive navigation of complex subjects.

Dialogue, too, becomes a mechanism for co-constructing understanding, as users query, clarify, and iterate with machine interlocutors. Consider the evolution from the Dewey Decimal System to Google Search, and now to interactive AI agents capable of generating annotated bibliographies, extracting key arguments from sprawling literatures, and even proposing novel connections across disciplines. The hope—I guess—is that these advances will empower individuals and institutions to reclaim agency in the knowledge economy, transforming the chaos of information overload into an opportunity for deeper insight and more effective action.

LLMs are computational marvels that work as cultural technologies, and they are going to reshape how we learn, create, and remember. Henry has made a very good start, I think, at setting out a taxonomy of how to make progress in thinking about this. But surely this is not complete? Surely there is more?


If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail…

#notes-henry-farrell-on-llms-as-cultural-technologies
#mamlms
#gopnikism