A Return of "Management Cybernetics" as a Way Forward Out of Economics-Based Neoliberalism?
I signed up to write an 800-word review of Dan Davies’s brand-new The Unaccountability Machine. The problem is that what I now have is more than 5000 words. And the publication it was for has just bounced the 1400-word compressed version. So here is THE WHOLE CURRENT THING…
By reviving the ideas of cybernetics pioneer Stafford Beer, Davies suggests we can build organizations that are not just efficient, but truly accountable. In an age of AI anxiety and institutional mistrust, “The Unaccountability Machine” offers a timely reminder: the machines we fear most are the ones we’ve already built.
We have built a world of vast, interlocking systems that no one can fully understand. From corporate behemoths to government bureaucracies, these leviathan-like societal machines with human beings as their parts make decisions that shape our lives—often with disastrous consequences. Can there be a way to tame these monsters of our own creation, to give them human faces? Dan Davies thinks the forgotten discipline of “management cybernetics” might provide a way. That is the crux of his brand-new The Unaccountability Machine. Our societal woes stem not from individual failings, but from the opaque workings of large-scale decision-making structures—hence the need for better system design, better feedback loops, and a greater and better-chosen variety of information, states, and actions in these machines’ control mechanisms. Cybernetics was the discipline that was supposed to help us understand communication and control in complex systems. The steersmen all ran aground. But we can try again.
Understanding how to make our systems more accountable and more human may not be the key to our survival, but it is certainly the key to our happiness and prosperity.
Dan Davies’s The Unaccountability Machine: Why Big Systems Make Terrible Decisions & How the World Lost Its Mind is a little book, and a great book.
How is it a little book? Damned if I know. Had I set out to write anything like this book, I could not have done so in less than four times the length.
Why is it a great book?
It is a great book because it sheds light on one of the most pressing issues of our time: why big systems make terrible decisions. Through a blend of historical analysis, theoretical insight, and contemporary case studies, Davies provides an essential read for those looking to understand the complexities of modern decision-making, and the pervasive issue of unaccountability and dysfunction in large human organizational systems.
It is a great book because it takes a lot of very important and fuzzy ideas—about how a world of more than 8 billion people tightly linked together by economic commodity exchange, lightspeed voice, and political control can somehow organize itself to be productive, peaceful, and free, when there is no way anything in our evolutionary past could possibly have predisposed us to pull and think together at such a scale—and makes them tractable. The result of that scale is inherent complexity and a lack of transparency that lead to catastrophic decision-making. Davies makes this case through intricately woven combinations of historical analysis, theoretical frameworks, and contemporary examples, which he uses to critique the phenomenon of ‘unaccountability’ that plagues modern society, from corporate behemoths to governmental institutions.
So what do we do? Davies says the first step is for him to write his book, attempting to revive what was once an important intellectual movement of the post-World War II world: cybernetics—Norbert Wiener’s idea that there should be principles we can discover about how to make our increasingly large and complex systems of human organization comprehensible to, and manageable by, human beings. The root is the Greek kybernētikos, meaning “good at steering a boat”. Cybernetics would have been a discipline, metaphorically, about how to steer a boat, or perhaps about how to build a boat that can be steered.
For at the heart of Davies’s argument is the concept that historical events and societal shifts are better understood through the prism of decisions rather than the events themselves. He posits that many of these decisions emanate not from the will of individuals but from the impersonal, often opaque workings of large systems. This perspective challenges traditional notions of accountability and decision-making, suggesting a world increasingly governed by what Davies terms ‘accountability sinks’—structures within organizations that deflect responsibility, thereby diluting individual accountability. Thus not events, but rather cybernetic information-flow structures, are at the center of at least modern history.
Davies’s argument is thus an optimistic one—that we can understand what appear to be the opaque workings of large systems, because if there is not yet a functioning intellectual discipline to help us manage them, we can build one. Davies writes as if this conundrum we face is a product of the post-WWII history of the so-called “managerial revolution”.
But Henry Farrell argues that it is in fact much older than that. And I agree with him. Davies has chosen what Farrell calls the “Vico” as opposed to “Kafka” prong of the fork:
Vico-via-Crowley and Kafka-via-Jarrell present the two prongs of a vaster dilemma. Over the last few centuries, human beings have created a complex world of vast interlocking social and technological mechanisms. Can they grasp the totality of what they have created? Or alternatively, are they fated to subsist in the toils of great machineries that they have collectively created but that they cannot understand?… Understood in this light, current debates about the Singularity are so many footnotes to the enormous volumes of perplexity generated by the Industrial Revolution: the new powers that this centuries-long transformation has given rise to, and the seething convolutions that those powers generate in their wake…
What is this rope he tries to revive? I see five pieces spun together:
(1) Revive the influence and reputation of counterculture-era management cyberneticist Stafford Beer. In his view, the principal governmental problem of managing corporate bureaucracies is a matter not for economists who focus on eliminating market failures, but rather one of supervising those bureaucracies in a way that ensures that the internal flow of information between deciders and decided-upon is kept in balance, so that they become and remain viable systems that are useful to humanity.
(2) Our current world is beset by accountability sinks—places where things are clearly going wrong, but it is nobody’s fault. As Felix Martin puts it:
frustrated customers endlessly on hold to ‘computer says no’ service departments… banking crises regularly recur—yet few individual bankers are found at fault… politicians’ promises flop[ping and] they complain they have no power; the Deep State is somehow to blame…
(3) Every organization needs to do five things: operations, regulation, integration, intelligence and philosophy. Operations is doing the work; regulation is making sure the people doing the work have what they need when they need it; integration is making sure people are pulling in the right direction; intelligence is planning so when things happen you can modify operations, regulation, and integration; philosophy is what you are doing all of this for. Davies writes:
Think of soldiers, quartermasters, battlefield commander, reconnaissance and field marshal, or… musicians, conductor, tour manager, artistic director and Elton John…
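To see the schema whole, here is a minimal sketch in Python, mapping Davies’s Elton John example onto the five functions. The class, the field names, and the viability check are my own illustrative gloss on Davies’s text, not Beer’s notation (Beer numbered the functions System 1 through System 5):

```python
# A toy rendering of the five functions as a checklist for any organization.
from dataclasses import dataclass

@dataclass
class ViableSystem:
    operations: str    # doing the work
    regulation: str    # getting the doers what they need, when they need it
    integration: str   # keeping everyone pulling in the same direction
    intelligence: str  # scanning ahead so plans can change in time
    philosophy: str    # remembering what all of this is for

band_on_tour = ViableSystem(
    operations="musicians",
    regulation="conductor",
    integration="tour manager",
    intelligence="artistic director",
    philosophy="Elton John",
)

def is_viable(system: ViableSystem) -> bool:
    """On Beer's account, a system missing any of the five functions is not viable."""
    return all(vars(system).values())

print(is_viable(band_on_tour))  # True
```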
(4) Sometimes you need to get all the things done by simplifying-and-optimizing: delegate some of the work to a suborganization that will report its metrics, and as long as it meets its metrics don’t worry about it—but when it fails to meet its metrics, take a careful look inside at what is going on. This is attenuation: somehow reduce the complexities the organization has to keep trying to deal with so that there is less to do. (But do this badly and you are just pretending things are less complicated than they are.)
Davies’s example is how you would regulate the temperature of a squirrel’s cage. Rather than record its temperature at every minute, and then think about what would be the best thing to do, simply install an automatic thermostat and set the target temperature:
Variety engineering for beginners: The ambient temperature of our squirrel cage could take practically any value (within a realistic range). If we make a decision to reduce our information set to ‘too hot’ and ‘too cold’, we can match it to a regulator with states of ‘heater on’ and ‘heater off’; we’ve built a thermostat. Doing this isn’t difficult—we just decide to throw away some of the information, on the assumption that it’s not relevant. That might end up being a bad decision, of course (if the ambient temperature rises above 100 degrees, for example, perhaps because the lab is on fire), but there is a huge saving in the amount of variety and information that we need in the regulator. This sort of decision is fundamental to the cybernetic analysis of systems; you are always attenuating variety in some way or other, unless you are describing a system that consists of everything in the universe…
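In code, the whole construction is a few lines. A minimal Python sketch, with a hypothetical target temperature and sample readings; the structure is the point—a continuous reading attenuated to two states, matched by a regulator with exactly two responses:

```python
# Variety attenuation: reduce a continuous ambient temperature to two states.
TARGET = 20.0  # hypothetical target, in degrees

def attenuate(ambient: float) -> str:
    """Throw away almost all the information in the reading."""
    return "too cold" if ambient < TARGET else "too hot"

def regulate(state: str) -> str:
    """Two regulator states to match two attenuated states."""
    return "heater on" if state == "too cold" else "heater off"

for reading in [12.0, 19.9, 20.1, 35.0, 110.0]:
    print(f"{reading:5.1f} -> {attenuate(reading):8s} -> {regulate(reading)}")

# Davies's caveat shows up at 110 degrees (the lab is on fire): the
# attenuated state is still just "too hot", the regulator's whole
# repertoire is "heater off", and the discarded information turns
# out to have mattered.
```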
(5) Most of the time what you really need to get all the things done is to build better feedback loops, which requires amplification so that the control mechanism can see what is really there outside. The organization needs to better match in its internal structures the complexity of the environment it is dealing with, so that it sees what it needs to see in time for something to be done about it before it is too late.
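The underlying principle here is Ashby’s law of requisite variety, on which Beer built: only variety can absorb variety, so a regulator can hold an outcome steady only if it has a distinct response for each distinct disturbance the environment can throw at it. A minimal sketch, with a hypothetical disturbance table:

```python
# Requisite variety: the regulator must match the environment's variety.
disturbances = ["cold snap", "heat wave", "power cut"]

# The two-state thermostat from above: variety of 2.
thermostat = {"cold snap": "heater on", "heat wave": "heater off"}

# An amplified regulator: extra sensing and an extra response.
amplified = thermostat | {"power cut": "start backup generator"}

def unregulated(regulator: dict[str, str]) -> list[str]:
    """Disturbances the regulator cannot see or answer."""
    return [d for d in disturbances if d not in regulator]

print(unregulated(thermostat))  # ['power cut']: unmatched variety
print(unregulated(amplified))   # []: requisite variety achieved
```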
We have not learned these management-cybernetic lessons; we do not think of our systems in this way. And so lots of things have gone, and do go, and will continue to go wrong. Davies’s tracing of the cybernetic flaws in the implementation of the ‘managerial revolution’ provides a historical foundation for understanding the current state of unaccountability. Critical decisions are increasingly made by systems and processes rather than by individuals with a stake. This shift, Davies argues, has not only made it difficult to pinpoint responsibility for poor decisions but has also alienated individuals from the very systems that govern their lives. A significant strength of the book lies in its ability to connect these abstract concepts to tangible, real-world consequences. Davies leverages a range of examples, from the tragicomic episode of squirrels at Schiphol Airport to the profound societal impacts of the 2008 financial crisis. These illustrations serve not only to elucidate his points but also to demonstrate the far-reaching implications of systemic unaccountability on both a micro and a macro scale.
Note that nowhere in this management cybernetics is a primary task one of making sure that people have the right incentives to act on the information they have (that elimination of “market failures” is the focus of economics). It is, rather, making sure that the flow of information is not neurotic—neither too little for those who must decide to grasp the situation, nor so much that those who must decide drown in it, nor too irrelevant. I wish I could say: “It’s a kind of psychoanalysis for non-human intelligences, with [counterculture-era management cyberneticist] Stafford Beer as Sigmund Freud”. But I cannot. Felix Martin wrote that in his Financial Times review of The Unaccountability Machine. And since I cannot do better, I unabashedly steal it.
I need to stress here that economists’ advice and counsel is, in Davies’s view, worse than useless in solving these problems. Even on their home stomping grounds—in this case, the functioning of international capital markets—economists’ insistence on trusting market prices, and the profit-seeking economic agents guided by them, goes abysmally wrong. In the 1980s British Chancellor of the Exchequer Nigel Lawson argued that since Britain’s current-account deficit was generated by a free market, it must be good. But it was not good at all, Davies argued on his weblog. And it was not good at all for essentially cybernetic reasons that standard economics could not see:
Where did it all go wrong? Lots of places, of course, in different ways and at different times. But the British economy… from about 1986 to 1992…. Among a certain kind of economist of a certain age… [what] Nigel Lawson is famous for is the “Lawson Doctrine”…. Current account deficits… [that] have arisen as a result of private sector firms’ and households’ investment and spending decisions… are… “benign and self-correcting”…. A potted summary of the end of the Lawson Boom (and of Conservative Party economic credibility for two decades after the Black Wednesday currency crisis) would be “it wasn’t”….
The point of the Hayekian economy as information processor is that whenever a private sector transaction happens, information is transmitted through the supply and demand mechanism, and then incorporated into the price mechanism. If you presume rational expectations, then this mechanism ought to be self-regulating; the price ought to adjust to bring the transactions into equilibrium, so that it shouldn’t be possible to build up deficits which have really bad consequences…. [Yet] the thing that happened is… the good kind of deficit… that should self-correct, didn’t. How come?…
Tak[e] seriously the Hayekian model of the market economy as an information processor. If you do that, the answer is pretty trivial—the self-regulatory mechanism didn’t have enough information processing capability to regulate this problem quickly enough…. It didn’t have enough bandwidth…
If von Hayek’s arguments that the market-economy price mechanism can and does have enough information-transmission bandwidth were correct, Black Wednesday would not have happened, because the Lawson Doctrine would have been sound. It wasn’t sound. It did happen. It is not correct.
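A toy illustration of the bandwidth point, on assumptions that are mine rather than Davies’s: a corrective feedback loop acting on current information damps an imbalance away, while the identical corrective force acting on stale information overshoots and destabilizes:

```python
# Delayed negative feedback: the same gain that self-corrects with no
# lag becomes destabilizing when the correction acts on old information.
def simulate(lag: int, gain: float = 0.6, steps: int = 40) -> float:
    imbalance = [1.0] * (lag + 1)  # start with a unit deficit
    for _ in range(steps):
        # the correction responds to the imbalance as it stood `lag` periods ago
        correction = gain * imbalance[-1 - lag]
        imbalance.append(imbalance[-1] - correction)
    return imbalance[-1]

print(f"no lag:  final imbalance {simulate(lag=0):+.2f}")  # ~0: self-corrects
print(f"lag = 4: final imbalance {simulate(lag=4):+.2f}")  # oscillates and grows
```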
Davies goes on:
Economists don’t… think this way… because the economic debate about information and calculation was won in the 1920s with the “socialist calculation problem”, while the modern theory of information wasn’t invented until Shannon and Wiener… in the 1940s. “Long and variable lags” could and should have been translated into much more rigorous terms, decades ago. But we had other things on our minds…
This worse-than-uselessness is especially baneful, Davies thinks, as far as the large-corporation sector of the economy is concerned. Davies appears especially angry at what he sees as the intellectual wasteland and on-the-ground rubble left by Milton Friedman and his shareholder value-maximization doctrine. It took a post-WWII oligopolistic-company system that was in rough, effective, and useful cybernetic balance—what John Kenneth Galbraith called the “technostructure”—and destabilized it by turning its components into harmful paperclip-style short-term profit maximizers.
He eloquently warns of the potentially baneful influence of us economists:
As long as the ideology of economics maintains its dominant position, there is always a considerable danger of the Friedman doctrine rising back up from the dead. If the highest-level decision-making mechanisms of the world are to be viable systems, they need a philosophy which can balance present against future and create self-identity…. This philosophy cannot look much like mainstream economics…. Any system which is set up to maximise a single objective has the potential to go bonkers…. You can’t have the economists in charge, not in the way they currently are…. The top level of any decision-making system that’s meant to operate autonomously can’t be a maximiser. And so, the governing philosophy of the overall economic system can’t be based on the constrained optimisation methodology that’s currently dominant in the subject of economics…
And he has much else to say.
Still, is this more than mere handwaving? I think not quite, but almost: I hope this book will spur the thinking that we actually need to do, for we badly need a revival of the intellectual thread of cybernetics.
Why do we badly need it? Let me back up, and approach that question the long way around:
We East African Plains Apes are neither wise, nor smart. We are lucky if we can remember where we left our keys last night.
And yet, working together, we have conquered and now dominate the world. We are an awe-inspiring concept-thinking and nature-manipulating anthology intelligence, whose spatial reach embraces the globe, whose numerical reach now covers more than 8 billion of us, and whose temporal reach—because of writing—now extends back 5000 years. Even as long ago as 150,000 years ago, weak of tooth and bereft of claw as we are, and when our ability to coöperate to work and think together was limited to a band of perhaps 100 with a memory that extended back only 60 years or so, we were not just being eaten by the hyenas: we (or, rather, our very close Neandertal cousins) were also eating them.
But how can we work and think together at 8 billion-plus scale? We no longer just have our families, our neighbors, and our coworkers with whom we interact via networks of affection, dislike, communication, barter, exchange, small-scale plan, and arm-twisting.
Instead, or rather in addition, more and more of what we do is driven by an extremely complex assembly of vast interlocking social and technological mechanisms that we have made but that we do not understand. Dan Davies hopes to set forth an intellectual schema to help us understand them: that is the goal of management cybernetics.
The publisher’s website for the book says:
Profile Books: The Unaccountability Machine (Hardback): ‘Part-biography, part-political thriller, The Unaccountability Machine is a rousing exposé of how management failures lead organisations to make catastrophic errors. “Entertaining, insightful … compelling” Financial Times…. Dan Davies examines why markets, institutions and even governments systematically generate outcomes that everyone involved claims not to want…. Stafford Beer… argued in the 1950s that we should regard organisations as artificial intelligences, capable of taking decisions that are distinct from the intentions of their members. Management cybernetics… was largely ignored…. With his signature blend of cynicism and journalistic rigour, Davies looks at what’s gone wrong, and what might have been, had the world listened to Stafford Beer when it had the chance… <https://profilebooks.com/work/the-unaccountability-machine/>
Groups of humans that achieve outcomes or “decisions that are distinct from the intentions of their members…”—now that leads me to reach for Adam Smith and his Wealth of Nations, in which the combination of profit and security motives on the part of merchants leads them to take actions that add up to making society as rich as possible by:
render[ing] the annual revenue of the society as great as he can…. [although] he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it…
For Adam Smith, the fact that the market economy considered as a slow-AI has a mind of its own is not a bug but a feature. Indeed, the fact that the market economy as slow-AI has this mind of its own allows Adam Smith to dynamite the entire “Political Œconomy” literature of how to make England great that The Wealth of Nations intervenes in.
Smith thus created the intellectual discipline of economics by building a toolkit for thinking about the global-scale societal human thinking- and doing-mechanism that is the coördinated market economy.
But the market economy was not alone as a human-made but inhuman-scale and incomprehensible societal mechanism. We have others. And all of our global-scale societal mechanisms are Janus-faced.
They are extraordinarily, massively, mind-bogglingly productive. How much richer am I than my Ohlone predecessors who were the only people then living on the shore of San Francisco Bay four hundred years ago? A hundredfold richer? More? And what do I do to gain these riches? I know things and tell people stories about the human economy of the past. That is what I do.
But these mechanisms are also horrifyingly alien, inhumanly cruel, and bizarrely incomprehensible. Franz Kafka saw this. As Randall Jarrell wrote: “Kafka says… the system of your damnation… your society and your universe, is simply beyond your understanding…” Purdue Pharma “decides” that a good way to make money for its shareholders is to addict Americans to opiates, and the individual humans who are its components fall into line—and afterwards all protest that that was not what they meant to do. But they did it. Global warming means that Berkeley right now has the climate that Santa Barbara, 300 miles south, had in my youth. Who decided to do this?
And I have not gotten to the fact that this is the timeline with the killer robots and the automated distributed propaganda machines that would make O’Brien of 1984 or Gletkin of Darkness at Noon laugh with joy.
New York Times columnist Ezra Klein says that in trying to understand the latest wave of cultural technologies that are the tech sector’s MAMLMs—Modern Advanced Machine-Learning Models—he is driven to:
metaphors… [from] fantasy novels and occult texts… act[s] of summoning… strings of letters and numbers… uttered or executed… create some kind of entity… [to] stumble through the portal…. Their call… might summon demons and they’re calling anyway…
But what Ezra does not appear to recognize is that his metaphor of finding ourselves in a room with possibly malevolent THINGS that have escaped their confining pentacles applies not just to programs running on NVIDIA-designed GPUs. Mary Shelley saw that it applied to science, Marx to the market economy, Kafka to bureaucracy, Adorno to the creation and transmission of culture, Marcuse to modern democracy, and so on. Can we understand and manage these inhuman massive-scale systems that are in the last analysis made up of people doing things for reasons? Can we control or constrain them to give them humanlike souls, or a human face?
So far the answer has been, largely, no. Consider what Gabriel García Márquez intended as extremely high and definitely worshipful praise of Cuba’s Maximum Leader Fidel Castro:
He has breakfast with… two hundred pages of news…. No one can explain how he has the time or what method he employs to read so much and so fast…. A vast bureaucratic incompetence affecting almost every realm of daily life, especially domestic happiness, which has forced Fidel Castro himself, almost thirty years after victory, to involve himself personally in such extraordinary matters as how bread is made and the distribution of beer…
To which Jacobo Timerman snarked:
Castro… has a secret method… for reading quickly…. Yet, thirty years after the revolution, he hasn’t managed to organize a system for baking bread and distributing beer…
From a cybernetic perspective, most of our economic world today suffers from the inverse of the flaws of Fidel Castro’s system. We are under the dominion of sophisters, calculators, and most of all economists. So we have systems that are highly efficient at managing the wrong things in the wrong way. They are maximizers, where the goal is to make as much money as possible. As Davies writes:
A maximising system… defin[es] an objective function, and throw[s] away all the other information…. [But] the environment is going to change, and something which isn’t in the information set any more is going to lead… [to] destruction…. Every decision-making system set up as a maximiser needs to have a higher-level system watching over it. There needs to be a red handle to pull, a way for the decided-upon to indicate intolerability…
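As a toy sketch of that last sentence, here is what a “red handle” looks like structurally. Everything specific (the pricing objective, the anger variable, the tolerance threshold) is a hypothetical illustration of mine, not Davies’s:

```python
# A maximiser pursues its single objective while a higher-level watcher
# tracks a variable the objective threw away and can halt the system.
def maximiser_step(price: float) -> float:
    """More profit per unit, as far as the objective function can see."""
    return price * 1.10

def red_handle_pulled(customer_anger: float, tolerance: float = 100.0) -> bool:
    """The decided-upon signalling intolerability to the higher level."""
    return customer_anger > tolerance

price, customer_anger = 10.0, 0.0
for step in range(50):
    price = maximiser_step(price)
    customer_anger += price  # accumulates outside the maximiser's information set
    if red_handle_pulled(customer_anger):
        print(f"step {step}: red handle pulled at price {price:.2f}")
        break
else:
    print("the maximiser ran unchecked")
```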
But all is not lost, at least not with respect to the major shoggoths of our economy. This is what I see as Davies’s major action-item conclusion:
[In] the decision-making system of a modern corporation… one of its signals has been so amplified that it drowns out the others. The ‘profit motive’ isn’t…. Corporations… don’t have motives. What they have is an imbalance…. They aren’t capable of responding to signals from the long-term planning and intelligence function, because the short-term planning function has to operate under the constraints of the financial market disciplinary system…. Take away that pressure [and] it’s quite likely… corporate decision-making systems will be less hostile…. Viable systems fundamentally seek stability, not maximisation…. On any given day, managers spend a lot more time talking to their customers and employees than they do to investors; if they were able to pay attention to what they heard, that would be much healthier…
The von Hayekian strand of economics—which is still the dominant strand—assumed that the tasks of managing society were primarily that of moving decision-making to where the information already was—that was the function of private property; that of signaling where additional resources were needed—that was contract; and that of incentivizing those with the resources, authority, and information to do their job—that was the profit motive. Actual information flows of any more bandwidth than “this is the price” were seen as unnecessary.
But that has to be wrong. Management cybernetics offers a possible way to think about what would be right.
But economics is not the only intellectual discipline, and the economy is not the only global-scale societal-organizational mechanism, that could become more accurate and relevant with a dose of management cybernetics. Consider the issue of choosing “targets”, and then rewarding people for meeting them:
Targets… ought to target the thing that you care about, not something which you believe to be related to it, no matter how much easier that intermediate thing is to measure. That doesn’t guarantee success; the phenomenon of “gaming the system”, or the tendency of control systems to be undermined by adversarial activity, is much more general and complicated than this single problem. Targets are… an information-reducing filter on the system…. Attenuating information to, literally, “make it manageable” is the whole [point]…. That’s the fundamental reason why they sometimes go wrong…. As far as I can see, “teaching to the test” is a one hundred and eighty degrees inverted description of a phenomenon that ought to be called “not testing for the outcomes you want”…
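“Not testing for the outcomes you want” can be put in a dozen lines. A toy sketch, with hypothetical numbers of my own: reward an intermediate metric (test scores) instead of the outcome you care about (learning), and the optimiser pours all effort into the metric’s cheaper, gameable component:

```python
# Goodhart-style target gaming: optimise the proxy, lose the outcome.
def test_score(learning: float, cramming: float) -> float:
    # the proxy rewards both, but cramming buys points more cheaply
    return 1.0 * learning + 3.0 * cramming

def optimise_for_the_target(total_effort: int = 10) -> tuple[int, int]:
    """A school judged only on scores maximises the proxy."""
    return max(
        ((learning, total_effort - learning) for learning in range(total_effort + 1)),
        key=lambda split: test_score(*split),
    )

learning, cramming = optimise_for_the_target()
print(f"effort allocation: learning={learning}, cramming={cramming}")
print(f"test score (the target, met): {test_score(learning, cramming)}")
print(f"learning (the outcome wanted): {learning}")
# All effort goes to cramming: the target is hit, the outcome is missed.
```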
And the tasks of management cybernetics will never end:
Complexity is constantly increasing…. Reorganisation is the way in which environmental variety is brought back into balance with the capacity to manage it…. The highest function of the Viable System Model is that of balancing the need to change with the capacity to change. It’s necessary to respect the problem, make a realistic assessment of what capacity exists or can be gathered, then think in terms of priorities and trade-offs to meet the most vital and immediate changes that need to be made. Systems have to be designed and redesigned, so that they obey the basic principles of “variety engineering” (the management science of ensuring that information arrives in the right place, in time and in a form in which it can be the basis of decision making)…
Thus the book is one of great, if Sisyphean, hope. We can fix our excessive dependence on unaccountable inhuman-scale systems. It is at bottom a problem of information flow, to be attacked with greater transparency, human oversight, and the reintroduction of personal responsibility. But this requires conscious effort to combat and fix the tendency towards unaccountability, system opacity, and misoptimization.
The Unaccountability Machine is, I think, a much better road towards understanding our current societal-organizational environment than others currently being put forward. I found Henry Farrell’s remarks on two of these useful. First, Henry sets the stage as he sees it:
Vico-via-Crowley and Kafka-via-Jarrell present… two prongs of a… dilemma. Over the last few centuries, human[s]… have created a complex world of vast interlocking social and technological mechanisms. Can they grasp the totality of what they have created? Or alternatively, are they fated to subsist in the toils of great machineries that they have collectively created but that they cannot understand?… “AI”… makes it… impossible to ignore… that humans have created a world… impervious to… understanding…. As with the obdurate physical universe, our first instinct is to populate this incomprehensible new world with gods and demons…
And then, Henry says, the technoutopianbros and technodystopianbros divide. The utopians are the:
AI rationalists… [who suppose] the purportedly superhuman entities they wish to understand and manipulate are intelligent, reasoning beings… [that] reason their way towards comprehensible goals. This might… allow you to trammel these entities, through the subtle grammaries of game theory and Bayesian reasoning. Not only may you call spirits from the vasty deep, but you may entrap them in equilibria where they must do what you demand they do…
This belief that the tools of the economist can successfully bespell our societal-scale mechanisms is, Dan Davies would say, naïve and wrong. I think he is right.
Then there are the dystopians, who believe that we should abdicate all choice and direction and simply welcome our AI-overlords:
We are confronted by a world of rapid change that will not only defy our efforts to understand it: it will utterly shatter them. And we might as well embrace this, since it is coming, whether we want it to or not. At long last, the stars are right, and dark gods are walking backwards from the forthcoming Singularity to remake the past in their image. In one of Land’s best known and weirdest quotes:
“Machinic desire… rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless tropism to zero control. This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself…. Digitocommodification is the index of a cyberpositively escalating technovirus, of the planetary technocapital singularity: a self-organizing insidious traumatism, virtually guiding the entire biological desiring-complex towards post-carbon replicator usurpation…”
And this “technocapital singularity” is to be celebrated! William Gibson describes a future “like a deranged experiment in social Darwinism, designed by a bored researcher who kept one thumb permanently on the fast-forward button”…
And this is simply crazy. For one thing, none of these processes have human intentionality. We are primed to attribute human intentionality to them for lots of reasons. But actually believing that they do have it will lead us far astray.
If these are our other potential guides, Dan Davies is vastly to be preferred.
In the final analysis, therefore, The Unaccountability Machine is a guide to action—or at least a guide to thought about how to take action. It is a great book: a crucial addition to the discourse on governance, ethics, and the role of social-organizational as well as nature-manipulating technologies in society. It should be read by anyone concerned with the direction in which our world is headed, offering both a stark warning and a glimmer of hope for a more accountable future.
Moreover, once we economists recognize our subordinate role, there may still be a place for us not that far from the hearth:
It’s not as if the toolkit of optimisation needs to be thrown away completely. As we said before, if you have some inputs and you want some outputs, then you want to get the most outputs for your inputs, and that’s what economics is all about. Providing a governing ideology and philosophy isn’t the only thing that makes a science worth doing—John Maynard Keynes once said that economists could consider their discipline a success when they were regarded as useful and competent technicians, like dentists.
References:
Adorno, Theodor, & Max Horkheimer. 1947. Dialectic of Enlightenment. Amsterdam: Querido. <https://archive.org/details/dialecticofenlig0000hork>.
Beer, Stafford. 1972. Brain of the Firm. London: Allen Lane. <https://archive.org/details/brain-of-the-firm-reclaimed-v-1>.
Crowley, John. 1987. Ægypt. New York: Bantam Books. <https://archive.org/details/isbn_9780553051940>.
Davies, Dan. 2024. The Unaccountability Machine: Why Big Systems Make Terrible Decisions and How the World Lost Its Mind. London: Profile Books. <https://www.blackwells.co.uk/bookshop/product/The-Unaccountability-Machine-by-Dan-Davies/9781788169547>.
Davies, Dan. 2024. “Sympathy for the Folk Devil”. Back of Mind. January 17. <https://backofmind.substack.com/p/sympathy-for-the-folk-devil>.
Davies, Dan. 2024. “Laws of Managerial Motion”. Back of Mind. November 22. <https://backofmind.substack.com/p/the-laws-of-managerial-motion>.
Davies, Dan. 2023. “Goodhart as Epistemologist”. Back of Mind. July 21. <https://backofmind.substack.com/p/goodhart-as-epistemologist>.
Davies, Dan. 2023. “The Lawson Doctrine”. Back of Mind. April 21. <https://backofmind.substack.com/p/the-lawson-doctrine>.
DeLong, J. Bradford. 2024. “Large-Scale Transcontinental Coöperation in the Classical-Age East-African Plains Ape”. Grasping Reality. April 8. <https://braddelong.substack.com/p/large-scale-transcontinental-societal>.
DeLong, J. Bradford. 2024. “Really-Existing Socialism”. Grasping Reality. April 8. <https://substack.com/@delongonsubstack/note/c-53454807>.
Farrell, Henry. 2024. “Vico’s Singularity”. Programmable Mutter. May 1. <https://www.programmablemutter.com/p/vicos-singularity>.
Farrell, Henry. 2024. “Cybernetics Is the Science of the Polycrisis”. Programmable Mutter. April 17. <https://www.programmablemutter.com/p/cybernetics-is-the-science-of-the>.
Farrell, Henry. 2010. “The Goggles Do Nothing”. Crooked Timber. December 8, 2010. <https://crookedtimber.org/2010/12/08/the-goggles-do-nothing/>.
Jarrell, Randall. 1941. “Review: Kafka’s Tragi-Comedy”. Kenyon Review. 3:1 (Winter), pp. 116-119. <https://www.jstor.org/stable/4332231>. Cited in: Farrell, Henry. 2024. “Vico’s Singularity”. Programmable Mutter. May 1.
Herodotos of Halikarnassos. [-425] 1904. Histories. Trans. Henry Cary. New York: D. Appleton and Co. <https://archive.org/details/historiesofherod00her>.
Klein, Ezra, & Erik Davis. 2023. “Ezra Klein Interviews Erik Davis.” New York Times. May 2. <https://www.nytimes.com/2023/05/02/opinion/ezra-klein-podcast-erik-davis.html>.
Koestler, Arthur. 1941. Darkness at Noon. New York: The Macmillan Company. <https://archive.org/details/darknessatnoon0000arth>.
Marcuse, Herbert. 1964. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. <https://archive.org/details/marcuse-herbert-one-dimensional-man-1964_202012>.
Martin, Felix. 2024. “The Unaccountability Machine—why do big systems make bad decisions?” Financial Times, April 4. <https://www.ft.com/content/0bb1b48f-b85a-4596-a0da-ac819bc69647>.
Marx, Karl. [1849] 1976. Wage Labor & Capital. New York: International Publishers. <https://archive.org/details/wagelabourcapit000marx>.
O’Hanlon, Larry. 2010. “Ancient Humans May Have Dined on Hyenas.” NBC News, June 9. <http://www.nbcnews.com/id/wbna37784102>.
Orwell, George. 1949. Nineteen Eighty-Four. New York: Milestone Editions. <https://archive.org/details/nineteeneightyfo0000orwe_q5v1>.
Profile Books. 2024. “The Unaccountability Machine”. <https://profilebooks.com/work/the-unaccountability-machine/>.
Shelley, Mary Wollstonecraft. 1818. Frankenstein, or The Modern Prometheus. London: Lackington, Hughes, Harding, Mavor & Jones. <https://archive.org/details/frankenstein1818_202012>.
Smith, Adam. 1776. An Inquiry into the Nature & Causes of the Wealth of Nations. London: W. Strahan & T. Cadell. <https://archive.org/details/inquiryintonatur01smit_0>.
Timerman, Jacobo. 1990. “Reflections: A Summer in the Revolution—1987”. New Yorker. 66:26 (August 13), p. 62 ff. <https://archives.newyorker.com/newyorker/1990-08-13/flipbook/062/>.
Vico, Giambattista. 1744. The New Science. Trans. Thomas Goddard Bergin & Max Harold Fisch. Ithaca, N.Y.: Cornell University Press. <https://archive.org/details/newscienceofgiam0000vico>.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press. <https://archive.org/details/cyberneticsorcon0000wien>.