The Iron InfoTech Law of Master & Servant: The Biggest Threat from MAMLMs Is Not Malevolent Machines, But Our Own Accidental Self-Pwnage
The real risks from “AI” we need to deal with NOW are how they are already hacking our brains to turn us into zombie cognitive slaves of people and systems that do not wish us well at all—with no Artificial Super-Intelligence required. If you think capitalism is the “final boss” constraining humanity, you haven’t met the emergent properties of networked Kahneman System I stupidity. It is our own inability to filter, focus, and resist manipulation and brain-hacking that we need to most fear right now…
I am on the side of the belief that there is enormous user surplus from the coming of MAMLMs because:
platform tech oligopolists will give the models away for free or nearly free because they are and will remain desperate not to be disrupted by the next generation,
and yet gullible VCs will continue to fund startups and keep the pressure on in the almost certainly vain hope that one of their gambling chips will wind up placed on the “winner take all” square,
so, predominantly, the immense value from natural-language interfaces to structured and unstructured data will flow down the production network as use-value and not be captured within the production network as exchange-value,
plus there will be some big value hits for the very big-data, very high-dimension, very flexible-function classification, prediction, and estimation capabilities that MAMLMs running on GPUs enable.
And yet, and yet, and yet…
This tremendous potential jump in human machine-assisted cognition for human benefit and flourishing materializes, of course, only to the extent that these MAMLMs become our servants rather than our masters.
They need to be our tools to help us think smarter.
We need to avoid having them become the masters that hack our brains to our detriment. That might happen in several ways:
Perhaps the malevolent will seek to hack our brains for their power or direct profit.
Perhaps the sociopathic will seek to hack our brains because scaring us and gluing our attention and our eyes to screens is a way for them to extract an eye-dropper’s worth of money from each of us by selling our eyeballs to advertisers.
Perhaps the bias toward creating MAMLMs that speak with assurance and conviction, and that are therefore very persuasive, will lead us to hack our own brains into idiocracy: we are too stupid to be allowed into the bathtub, because we use Kahneman’s System I when we should be using System II, and thus metaphorically slip on the soap and crack our heads.
No: Travis Kalanick is not going to uncover the Mysteries of the Universe and discover New Physics via chatting about quantum mechanics with Elon Musk’s Grok-AI. But he may well wind up turning himself into a drooling zombie-slave to MechaHitler:
Sifu Tweety Fish: <https://bsky.app/profile/sifu.tweety.fish/post/3lu3le3h7pc2o>: ‘time was you’d have to endow a whole institute to get to be a physics crackpot just because you’re rich and dumb:
Paul Waldman: <https://bsky.app/profile/paulwaldman.bsky.social/post/3lu2bc223tc2e>: ‘My god these guys are such spectacular morons:
Matt Novak: Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries <https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060>: ‘The latest episode of the All-In podcast…. Travis Kalanick…discussed how he uses xAI’s Grok, which went haywire last week, praising Adolf Hitler and advocating for a second Holocaust against Jews. “I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics. And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that…. I pinged Elon on at some point. I’m just like, dude, if I’m doing this and I’m super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?… And this is pre-Grok 4…” Kalanick said…. “It’s like pulling a donkey…. It doesn’t want to break conventional wisdom…. You’re pulling it out and then eventually goes, oh, shit, you got something…” Kalanick said….
Palihapitiya went a step further…. “When these models are fully divorced from having to learn on the known world and instead can just learn synthetically, then everything gets flipped upside down to what is the best hypothesis you have or what is the best question? You could just give it some problem and it would just figure it out…”.
[Elon] Musk… suggested “general artificial intelligence” was close because he had asked Grok “about materials science that are not in any books or on the Internet…”.
Meta CEO Mark Zuckerberg announced Monday that his company was building enormous new data centers to work on superintelligence. “Meta Superintelligence Labs will have industry-leading levels of compute and by far the greatest compute per researcher,” Zuck wrote. “I’m looking forward to working with the top researchers to advance the frontier!…”
I mean, rarely has so much self-pwnage been compressed into such a small space. One would think that such a concentration of self-pwnage would collapse into a self-pwnage singularity, and cause a rip in the fabric of the cosmos itself.
That said:
This is a powerful piece of evidence for the view that near-term AI risks need a lot more attention. Ignore fears about hypothetical superintelligent machines that might someday pose existential threats to humanity. We have many near-term fish to fry here and now. Social-media dynamics have already influenced elections, incited violence, and deepened social divisions. MAMLM dynamics pose at least an equal threat.
The bottleneck to adopting MAMLMs in a way that could be called “successful”—a way in which social-media adoption has not been—is the psychological, managerial, and institutional imagination to slot them into their proper places in the distributed cognitive network that is the ASIHCM: the Anthology Superintelligence of the Human Collective Mind. In the case of MAMLMs, the challenge is not simply to deploy the technology, but to reimagine workflows, roles, and even the culture of institutions so that these tools augment human collective super-intelligence rather than substitute for it or undermine it. We do not need to build superintelligence. We need, rather, to figure out how to make our technologies boost rather than subtract from the superintelligence we have been building since the invention of writing.
For MAMLMs to be “successfully” adopted, leaders must cultivate the psychological insight to understand how people interact with these systems, the institutional flexibility to experiment with new forms of collaboration, and the managerial imagination to rethink job design and much else. The challenge is not that machines are so smart and so malevolent, but that humans are so gullible and easily manipulated—even when there is no conscious mind on the other side of the exchange doing the manipulating, and the manipulation is just an emergent property of the system.
It might even be comforting if the real risks and dangers of AI lay in the emergence of some sentient, malevolent machine intelligence. The more prosaic—and insidious—truth is that even “dumb” algorithms, simply optimizing for engagement, clicks, or profit, can produce outcomes that are profoundly manipulative. No human sat in a room and plotted to turn many YouTube viewers into conspiracy theorists; rather, millions of micro-optimizations, guided by feedback loops, led to an emergent pattern that exploited cognitive biases and vulnerabilities.
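The no-conspirator dynamic described above can be sketched in a few lines of toy code. Everything here is invented for illustration (the click probabilities, the epsilon-greedy policy, the two content categories); it models no real platform. An optimizer rewarded only for clicks drifts toward whichever content clicks slightly more:

```python
import random

random.seed(0)

# Toy model: a recommender choosing between "moderate" and "extreme"
# content, rewarded only by clicks. All numbers are invented for
# illustration purposes.
CLICK_PROB = {"moderate": 0.10, "extreme": 0.15}  # outrage clicks a bit more

clicks = {"moderate": 0, "extreme": 0}
shows = {"moderate": 0, "extreme": 0}

def recommend():
    # Epsilon-greedy: explore 10% of the time, otherwise exploit
    # whichever arm has the best observed click-through rate so far.
    if random.random() < 0.1 or shows["moderate"] == 0 or shows["extreme"] == 0:
        return random.choice(["moderate", "extreme"])
    return max(clicks, key=lambda k: clicks[k] / shows[k])

for _ in range(100_000):
    arm = recommend()
    shows[arm] += 1
    clicks[arm] += random.random() < CLICK_PROB[arm]  # bool adds as 0/1

# The greedy loop ends up serving mostly "extreme" content, though
# no line of this code "wants" to radicalize anyone.
print(shows)
```

No line of this sketch intends to manipulate anyone, yet the loop converges on serving mostly the more provocative arm. That is the emergent manipulation in miniature: millions of micro-optimizations, no plot.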
The “Silicon Law of Attention Conservation” is important here. As technology makes it easier to produce “content”—be it essays, videos, or social media posts—the volume of information explodes, but our capacity to pay attention remains stubbornly finite. In the pre-digital era, gatekeepers like editors, publishers, and teachers played a crucial role in filtering information and curating quality. Today, with the democratization of content creation, everyone is a publisher, and the deluge of information threatens to overwhelm our ability to discern what is valuable, true, or relevant.
The result is a paradox: even as access to knowledge expands, genuine understanding and insight become harder to attain. Algorithms that promise to help us filter and prioritize information are themselves susceptible to manipulation, bias, and optimization for profit rather than truth. Thus tools for proper attention and filtering become more important and much more valuable than ever. And yet this appears to be something we are quite bad at.
Developing the skills of critical reading, skepticism, and information triage is more important than ever. Yet these are precisely the skills that our educational and social systems have failed to construct at anything like the scale we need.
Thus the true revolutionary advance in human flourishing from the coming of MAMLMs may never occur: the coming of MAMLMs may be a net minus. And, if the advance does occur, it will be slow, uneven, and dependent on deep organizational, workflow, and feedflow changes. It will require much patience and institutional experimentation.
The hype cycle promises rapid, dramatic gains in productivity and creativity; the reality is that organizations must experiment—often through trial and error—with new ways of integrating these tools. This means reengineering information flows, in the knowledge-space, in the entertainment-space, and in the work-coördination space. Some sectors will adapt quickly; others will lag. The unevenness of this diffusion will create winners and losers, financial and non-financial, economic and non-economic. For individuals, adaptability, curiosity, and a willingness to engage with new technologies—not just as users, but as codesigners—will be crucial to thriving.
But do not think that this “final boss” we face—and that Calacanis, Palihapitiya, Musk, Kalanick, and it now looks like Zuckerberg have permanently succumbed to—is anything simple like “capitalism”.
The “final boss” is not, or is not just, the profit motive, but the entire architecture of modern society: bureaucracies, regulatory regimes, social norms, and cultural practices that shape how technologies are built, adopted, and used. Surveillance, manipulation, and exploitation are not the exclusive preserve of corporations; states and other institutions are equally capable of, consciously or unconsciously, harnessing AI for their own ends and purposes—with “purpose” meaning either what some system was designed by some humans to do, or “purpose” in the sense that the Purpose of a System is What It Does. Network effects, feedback loops, and emergent phenomena mean that even well-intentioned actors can become complicit in systems that produce harmful outcomes.
Success will require a very broad-based reimagining of how society governs, integrates, and responds to information-technological power and capability.
References:
Collier, Angela. 2024. “Billionaires Want You to Know They Could Have Done Physics.” YouTube. https://www.youtube.com/watch?v=GmJI6qIqURA.
DeLong, J. Bradford. 2025. “What Is Man That Thou Art Mindful of Him?: How We All Already Have Our Superintelligent AI-Assistant.” Substack. https://braddelong.substack.com/p/what-is-man-that-thou-art-mindful.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. https://archive.org/details/thinkingfastslow0000kahn.
LiterallyAustin. 2025. “Owned (Pwned).” Know Your Meme. https://knowyourmeme.com/memes/owned-pwned.
Novak, Matt. 2025. “Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries.” Gizmodo. https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060.
Palihapitiya, Chamath. 2025. “All-In Podcast: AI, Quantum Physics, and Vibe Coding.” All-In Podcast. https://podcasts.apple.com/us/podcast/grok-4-wows-the-bitter-lesson-elons-third-party-ai/id1502871393?i=1000716888671.
Waldman, Paul. 2025. “My god…” Bluesky. https://bsky.app/profile/paulwaldman.bsky.social/post/3lu2bc223tc2e.
Shojaee, Parshin, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, & Mehrdad Farajtabar. 2025. “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” arXiv:2506.06941. https://arxiv.org/abs/2506.06941.
Sifu Tweety Fish. 2025. “Time was…” Bluesky. https://bsky.app/profile/sifu.tweety.fish/post/3lu3le3h7pc2o.
Zuckerberg, Mark. 2025. “Meta Superintelligence Labs Will Have Industry-Leading Levels of Compute.” https://blocksandfiles.com/2025/07/15/zucks-super-massive-ai-data-centers-will-be-storage-gold-mines/#:~:text=Zuckerberg%20says%3A%20%E2%80%9CMeta%20Superintelligence%20Labs,researchers%20to%20advance%20the%20frontier!%E2%80%9D.