Taming Our Unaccountable Societal-Scale Machines: How Impossible Is It Going to Be?

“Management Cybernetics” may be the name of the only thing that can save us from our own systems for anthology-intelligence societal-scale coöperation among us East African Plains Apes. But can that discipline exist? It certainly does not really exist now…



Here is one of the many, many eggs I have been juggling—under variable gravity—that I have dropped on the floor in the past six months. It is my promise to myself to revise and expand the already 5000-word review I wrote on my last birthday of Dan Davies’s superb little book The Unaccountability Machine <https://www.blackwells.co.uk/bookshop/product/The-Unaccountability-Machine-by-Dan-Davies/9781788169547>:

A taste of what I wrote then:


We have built a world of vast, interlocking systems that no one can fully understand. From corporate behemoths to government bureaucracies, these leviathan-like societal machines with human beings as their parts make decisions that shape our lives—often with disastrous consequences.

Can there be a way to tame these monsters of our own creation, to give them human faces?

Dan Davies thinks the forgotten discipline of “management cybernetics” might provide a way. That is the crux of his brand-new The Unaccountability Machine… [which provides] a much better road towards understanding our current societal-organizational environment than others currently being put forward….

Remember Henry Farrell’s setting the stage….

Human[s]… created a complex world of vast interlocking social and technological mechanisms… impervious to… understanding…. Our first instinct is to populate this incomprehensible new world with gods and demons…”. And then [having done so], Henry says, technoutopianbros or technodystopianbros divide. The utopian… “AI rationalists… [suppose] the purportedly superhuman entities they wish to understand and manipulate are intelligent, reasoning beings… [with] comprehensible goals… [that] might… be trammel[led] through… subtle grammaries of game theory and Bayesian reasoning…. You [may] call spirits from the vasty deep… [and] entrap them in equilibria where they must do what you demand they do…

By contrast, the dystopians believe we should… simply welcome our AI-overlords:

We are confronted by a world of rapid change that will not only defy our efforts to understand it…. We might as well embrace this…. The stars are right, and dark gods are walking backwards from the forthcoming Singularity to remake the past in their image. In one of [Nick] Land’s best known and weirdest quotes: “Machinic desire… rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless tropism to zero control…. The history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself…. Digitocommodification is the index of a cyberpositively escalating technovirus, of the planetary technocapital singularity: a self-organizing insidious traumatism, virtually guiding the entire biological desiring-complex towards post-carbon replicator usurpation…” And this “technocapital singularity” is to be celebrated!…

From my view, of course, Henry Farrell is completely right to judge all this as simply crazy:

  • For one thing, none of these processes have human intentionality.

  • We are primed to attribute human intentionality to them for lots of reasons.

  • But actually believing that they do have human intentionality will lead us far astray.

If these are our other potential guides—and they are—Dan Davies is vastly to be preferred. In the final analysis, therefore, The Unaccountability Machine is our best guide, at least to thought as to how to take action.



But not only have I failed to deliver, I now have to digest as well:

Programmable Mutter
Large AI models are cultural and social technologies
I’ve tried to use this newsletter to highlight ideas from various people, who are thinking about AI without getting stuck on AGI. These include Alison Gopnik’s argument that Large Language Models are best understood as “cultural technologies,” Cosma Shalizi’s…

My précis of what they say:

  • Large AI models should be viewed as cultural and social technologies—definitely not intelligent agents…

  • They are roughly of the same ilk as past information, communication, and coördination systems like pictures, writing, arithmetic, records, print, video, internet search, markets, bureaucracies, democracies, and ideologies

  • Thus we need to ignore fears of AGI and focus on the immediate societal effects of the coming of these cultural and social technologies…

  • Their ability to do very high-dimensional very big-data regression and classification with an attached natural-language front end makes them powerful tools to generate lossy summaries of massive human-generated data sets—thus transforming very complex assemblies of information into usable forms.

  • This is analogous to the market system’s summarization of a huge amount of information about production and demand into a single number: a price.

  • This is analogous to a democracy’s summarization of a huge amount of information about human collective-action goals and constraints into a single decision: a passed parliamentary motion…

  • This is analogous to a bureaucracy’s summarization of a huge amount of information about past procedures, successes, and failures into a single gate-keeping decision as to whether, for a particular action, established prerequisites have been satisfied

  • This is analogous to an ideology’s summarization of a huge amount of information about the world and humanity’s place in it into a single simplified picture of leaders, goals, friends, and enemies…

  • Users engaging with these systems are not consulting another mind—an intelligent oracle—any more than the market system that directed the boule and demos of the Athenai in the year -456 to source the ten tons of tin for Phidias’s statue of Athene Promakhis from Cornwall was an intelligent being…

  • Nevertheless, the effects of the introduction of these MAMLMs on society may well rhyme with those of previous transformative cultural-social organizational technologies like movable-type print in the Enlightenment

  • The consequences will be the spread of misinformation, bias, and cultural homogenization in some spheres alongside new forms of cultural diversity and immense creativity in others…

  • Their use will uncover non-obvious patterns across human knowledge, generating new avenues for scientific exploration and engineering progress…

  • They will alter economic power dynamics, as the tension between information producers and distributors will intensify…

  • The narrative of “AI” as superintelligent agents blocks coherent thought about the immediate and real social and economic opportunities and challenges…
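The “lossy summary” point can be made concrete with a toy sketch of the price analogy. This is my illustration, not anything from the post or the papers it discusses: the quote lists and the median-as-clearing-price rule are invented for the example. The point is only that many dispersed observations get compressed into one usable number, with nearly all of the underlying detail thrown away:

```python
from statistics import median

# Hypothetical data: what six buyers would pay and six sellers would accept.
# Each trader's full situation -- budget, urgency, alternatives -- is unknown
# to everyone else; only these quotes enter the "system."
bids = [12.0, 9.5, 11.0, 8.0, 10.5, 13.0]
asks = [9.0, 10.0, 11.5, 8.5, 12.5, 10.0]

# A crude clearing rule (an assumption for illustration): take the median
# of all twelve quotes. Twelve numbers -- and everything behind them --
# collapse into one. That compression is what makes the summary usable,
# and also what makes it lossy.
price = median(sorted(bids + asks))

print(price)  # 10.25
```

An LLM’s compression of a training corpus is vastly higher-dimensional, but on this view it is the same kind of move: a usable, lossy digest of information no single participant holds.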

What do I think about this? I do not know yet. I do know that I would like to visit an alternative timeline in which Herbert Simon’s Sciences of the Artificial had become a multidiscipline- and multidepartment-sparking book.


References:


Programmable Mutter
Cybernetics is the science of the polycrisis
One of the most interesting ‘might have been’ moments in intellectual history happened in the early 1970s, when Brian Eno traipsed to a dingy cottage in Wales to pay homage to Stafford Beer. Eno had written a fan letter to Beer after reading his book on management cybernetics…
Programmable Mutter
Vico’s Singularity
Vernor Vinge died some weeks ago. He wrote two very good science fiction novels, A Fire Upon the Deep and A Deepness in the Sky (the other books vary imo from ‘just fine’ to ‘yikes!’), but he will most likely be remembered for his arguments about the Singularity…


If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail…