DRAFT: Notes: What Is the Techno-Optimist Slant on “AI”?

@Ferry Building, SF :: 2024-03-14 Th 18:00 PDT :: Reinvent Future :: Invite: “The time has come to gather the A team in the San Francisco Bay Area and make the case for techno-optimism around AI… to clearly lay out the many positive ways that Artificial Intelligence could accelerate progress… the new ways forward that can make the most of this potentially world-changing new tool…counter the many negative narratives… building…. We want to do our part to meet those needs and help everyone better understand that there are positive ways forward that could unlock the full potential of AI, and responsibly deal with the risks. We want to make the case for techno-optimism in this critical year—and so we need you…”


In 2018 McKinsey Global Institute took its stab at the question:

Jacques Bughin & al.: Notes from the AI Frontier: ’At the global average level of adoption and absorption implied by our simulation, AI has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16 percent higher cumulative GDP compared with today. This amounts to 1.2 percent additional GDP growth per year. If delivered, this impact would compare well with that of other general-purpose technologies through history…


We are now halfway through. How is this forecast doing?

“AI”—parenthetically, I do not like that acronym, or the phrase underlying it. It’s MAMLM—Modern Advanced Machine-Learning Models. It’s GPTM—General-Purpose Transformer Models as a subclass of MAMLM. It’s LLM—Large Language Models as a subclass of GPTM. “AI” leads to an awful lot of fuzzy thought.

It is true, yes, that up until now the most prominent financially rewarding use case for the technologies now called “AI” has been using them to get your mitts on venture capitalists’ money.

It is true, yes, that up until now the second most prominent financially rewarding use case for the technologies now called “AI” has been to ride NVIDIA’s market valuation up and up and up to its current $2.25 trillion.

NVIDIA went public on January 22, 1999, with a then-market capitalization of $600 million. A decade ago, in March 2014, its market capitalization was $10 billion. GPUs turned out to be very useful indeed for drawing graphics, and then for building video, and then for providing proofs of work. But relying on crypto was risky: market cap nearly halved, from $150 billion to $80 billion, during the crypto winter of 2018–19. It was rescued by growing gaming and then a renewed crypto boom. But as of mid-2022, the cryptogrifters were barfing up their GPUs onto the resale market. It was very unclear whether NVIDIA would actually want to use the fab space and time it had reserved, and market cap was at $350 billion—still an amazing run since 1999, as NVIDIA took the dreams that massive parallelism was the future of computing and made them reality. But a run that looked near its end.

And then came ChatGPT, and CUDA turned out to be NVIDIA’s not-so-secret sauce. People hoping to profit from “AI” had two choices: (a) they could get their hands on GPUs at nearly whatever price they could manage, and hope to be first to market; or (b) they could use more and slower chips from somebody else, but would then have to assemble a real programming team, get real programming work done, and arrive later. And it looks like, of all those seeking to figure out how to take advantage of the AI boom, Apple alone has been willing to move more carefully, in the hope that it is more important to get things right than to get there first.

Yes, this is panic-boom-bubble demand in a situation of strongly limited manufacturing fab space. Yes, more than a quarter of NVIDIA’s chips are currently being bought by Microsoft and Facebook, as the first bets that it can sell cloud-shovels to AI gold miners and the second bets on… whatever it is betting on. There will be value there. It may not flow to anybody’s shareholders. But people will find important and useful things to do with this much computational capability.

MAMLM—we can now see much more clearly what its use cases in the 2030-or-so timeframe will be. The five that strike me as likely to prove most important:

  1. Very large-data very high-dimension regression and classification analysis—this is going to be game-changing.

  2. Actual natural-language interfaces to structured and unstructured databases—likely to be the hardest to tame, and the easiest to find itself turned into a dumpster fire.

  3. Auto-complete for everything on Big Steroids. Already autocomplete on my iPhone is, for the first time, useful—and really useful.

  4. The post-crypto boom way of separating gullible VCs from their money for the benefit of engineers, speculators, utopians, and grifters—I am not joking, but, unlike crypto and like the original internet build-out, this one is going to have powerful spillover benefits to the rest of us.

  5. Highly verbal electronic-software pets—that is, I think, the best way to conceptualize what we will see in this area.


(1) Very large-data very high-dimension regression and classification analysis is going to be, if we can manage to tame and subdue it, truly game-changing: the transformation from the world of the bureaucracy to the world of the algorithm, with not just Peter Drucker’s mass production, not just Bob Reich’s flexible customization, but rather bespoke creation for nearly everything. This is the heart of modern MAMLM-GPTM-LLM technologies. The use of vast datasets to identify patterns and make predictions will have profound effects in sectors from finance to healthcare. This is likely to be the biggest effect. It is also very hard to estimate—for if we could estimate how big it will be and what it will be good for, we would already have been able to do much of it.
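To fix ideas about what “very high-dimension” means in practice, here is a minimal sketch (synthetic data, illustrative feature counts, nothing more) of the sort of classification exercise that off-the-shelf tooling now handles routinely:

```python
# Toy version of very large-data, very high-dimension classification:
# thousands of features, far too many for a human analyst to eyeball,
# handled by a standard regularized model. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 50,000 observations, 2,000 features, only 50 of which actually matter.
X, y = make_classification(n_samples=50_000, n_features=2_000,
                           n_informative=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Regularized logistic regression: a workhorse of high-dimension prediction.
model = LogisticRegression(max_iter=1_000, C=0.1)
model.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```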

(2) Actual natural-language interfaces to databases—likely to be the hardest to tame, and the easiest to find itself turned into a dumpster fire, but also one that potentially hits the sweet spot vastly more than did the transformation from UNIX and UNIX-like text commands to the WIMP interface. Talking to MAMLM-classified databases through LLM layers will provide us with much more intuitive and efficient access to what have become our increasingly vast repositories of information. GitHub Copilot has shown us the enormous utility of an LLM hooked up to a largely trustable if unstructured database of coding examples. But do not judge the long-term usefulness of these technologies by what we have seen so far. Hooking up an LLM to the unstructured database that is the internet—which is what ChatGPT-4 and company are—gets you what you should have expected and do deserve: a continuation of the pages on the internet that are most like the prompt. And then you, the distributor of the LLM, frantically and urgently try to cover up what comes out via RLHF and prompt engineering.
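What a natural-language interface to a structured database amounts to, in sketch form. The `llm_complete()` call below is a hypothetical stand-in for whatever model API you use, and in any real deployment the generated SQL would need to be parsed and whitelisted before it touched anything that matters:

```python
# Sketch of an LLM-as-query-layer: translate an English question into SQL
# against a known schema, then run it. `llm_complete` is a hypothetical
# placeholder for a call to whatever language model you have on hand.
import sqlite3

SCHEMA = "CREATE TABLE sales (region TEXT, year INTEGER, revenue REAL);"

def llm_complete(prompt: str) -> str:
    """Placeholder: wire this up to an actual model's completion API."""
    raise NotImplementedError

def ask(question: str, db: sqlite3.Connection):
    prompt = (f"Schema:\n{SCHEMA}\n"
              f"Write one SQLite query that answers: {question}\n"
              "Return only the SQL.")
    sql = llm_complete(prompt)
    # A real system would validate the SQL here: read-only, schema-checked,
    # rate-limited; this is where the dumpster fires start otherwise.
    return db.execute(sql).fetchall()
```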

I will say that the funniest thing to happen in February was to learn what it takes to make the Effective Accelerationist community say that Sundar Pichai is deploying technologies too fast, and needs to be fired while we figure out what is going on here: just have a computer model respond to a user by constructing a picture of a Black Pope.

(3) Autocomplete for everything. Potentially, half an hour or more saved for every white-collar worker on the planet. Enough said.
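“Autocomplete for everything” is, at bottom, next-token prediction. A toy bigram version, just to fix ideas (real systems use transformer language models rather than word counts, but the contract is the same: prefix in, suggested continuation out):

```python
# Toy autocomplete: a bigram model that suggests the most likely next word
# given the previous one. Real autocomplete uses neural language models,
# but the interface is identical: prefix in, suggested continuation out.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts: dict, prev_word: str):
    nxt = counts.get(prev_word.lower())
    return nxt.most_common(1)[0][0] if nxt else None

corpus = "the meeting is at noon the meeting is rescheduled the memo is late"
model = train_bigrams(corpus)
print(suggest(model, "meeting"))  # -> "is"
```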

(4) There was an acronym in the Old British Empire: FILTH: Failed in London, try Hong Kong. FICTA should be our less snarky equivalent: Failed in crypto, try AI. Be very careful with your wallet. Especially hold on to your wallet when the AI booster is, or has close connections with, a past crypto booster. But becoming the post-crypto boom way of separating gullible VCs and other investors from their money for the benefit of engineers, speculators, utopians, and grifters is a thing that had, in the case of the original internet, and is likely to have, in this case, a substantial upside. Some of the projects of engineers, speculators, and utopians will be useful and profitable. Others will be unprofitable, but very useful indeed. And still others will be unprofitable and useless, but will in retrospect provide giants’ shoulders for future useful projects to stand on.

(5) And highly verbal electronic-software pets will be fun. Dogs have successfully reprogrammed us to serve them in only 20,000 years—a mere 10,000 iterations of the genetic algorithm. Yet that has not been to our detriment. Stories-in-books have been around for only 4,000 years. I would not care to speculate about the number of people today whose parasocial relationship with their imaginary friend Fitzwilliam Darcy is stronger and more rewarding than the one they have with nearly all of their real actual friends. And yet I cannot say that Jane Austen’s Pride and Prejudice is anything other than an enormous net plus for human society.

We cannot see the effects of MAMLM technologies in our GDP accounts. We may never. Typically almost all of the societal benefits and gains from write-once run-everywhere information goods evade measurement by our crude consensus indicators. Or maybe we do see them: recent U.S. numbers on productivity economy-wide are very interesting.

I would not say that we are going to see $13 trillion of added measured world annual GDP by 2030 from MAMLM. I would not say that we would see that $13 trillion by 2030 if we could construct the economic societal utility numbers we would really wish to have. But I will say that half of that looks clearly within reach.

Now how about the downside?

I regret that I frequently confuse people worried about “AI-alignment” with those who worry about “AI-safety”. This is bad, because, as the University of Texas’s Scott Aaronson says: “they hate each other”.

Some of the AI-safety people are worried about technological unemployment. To worry about this, you have to believe in one or more of:

  1. Demand for information-technology work will not continue to expand so that, the lower the price, the higher the total spending.

  2. Demand for information-technology work will not continue to expand so that, the richer the society, the higher the proportion of total spending devoted to information technology.

  3. A substantial part of white-collar infotech work today is mere “paper shuffling”—does in fact consist of jobs that are fully automatable: jobs that never did require a full Turing-class computational engine—the kind found inside the bones at the top end of your typical East African Plains Ape—but that could be entirely done by a much stupider system, as long as it could kinda-sorta understand human natural language.

Of these, (1) and (2) seem to be clear losers. There is nothing in history that suggests anything other than very high price and income elasticities of demand for information, and thus for the labor that creates and disseminates it. At the Walt Disney studio nearly a century ago, people hand-drew animations on individual film cels: 60 animators and 100 assistants, at perhaps $20 a day for the animators and $5 a day for the assistants (adjusted for inflation, that is in today’s dollars roughly $100,000 a year for the animators and $25,000 a year for the assistants), for a total production cost of $1.5 million (adjusted for inflation, $40 million). My cousin Phil Lord produced last year’s “Spider-Man: Across the Spider-Verse” using an awful lot of GPUs by any standard other than crypto-mining or GPT-training. And his film employed perhaps three times as many artists and programmers as worked on “Snow White”.
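The inflation arithmetic behind those figures, spelled out. This is a rough check only; the roughly 250 working days a year and the roughly 20-fold price-level multiplier since the mid-1930s are my assumptions:

```python
# Back-of-the-envelope check on the Snow White-era pay figures: daily wages,
# an assumed ~250 working days per year, and an assumed ~20x price-level
# increase since the mid-1930s.
WORK_DAYS_PER_YEAR = 250   # assumption
INFLATION_MULTIPLIER = 20  # rough mid-1930s -> today multiplier, assumption

for role, daily_wage in [("animator", 20), ("assistant", 5)]:
    nominal_annual = daily_wage * WORK_DAYS_PER_YEAR
    todays_dollars = nominal_annual * INFLATION_MULTIPLIER
    print(f"{role}: ${nominal_annual:,}/yr then, ~${todays_dollars:,}/yr today")

# animator: $5,000/yr then, ~$100,000/yr today
# assistant: $1,250/yr then, ~$25,000/yr today
```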

We have seen technological unemployment where one fairly small piece of a rigid value chain gets revolutionized. The classic case is handloom weaving: growing cotton was rigid, spinning cotton had become rigid, making apparel was rigid, and only weaving was revolutionized by machine in the two crucial decades. Information technology jobs are by their nature not embedded rigidly in rigid value chains.

In the evolution of these information technologies, I can think of only one case in which increases in the capability of information technology have been a serious source of job loss. That one is the loss, between 1970 and 2000, of up to 1 million switchboard-operator jobs. Yet most people who held those jobs were simply reclassified under a different job title—one that no longer focused on the eliminated “switchboard” task—because their jobs were no more really to plug things into a switchboard than the job of a Manhattan doorman was ever to be just the person who opened the door.

Non-technological-unemployment AI-safety concerns? I think they are there. But they are nothing special. And they are badly managed because too many people who can buy too many politicians benefit from the misalignment of societal values with market prices. We had Purdue-Sackler pharmaceutical risk: addicting people to oxycodone by the millions. We had Moody’s mortgage-security assessment risk: nearly bringing down the world economy in 2008. Elon Musk claimed to Kara Swisher that neither he nor any member of his family would get the Covid vaccine because their natural immunity was so strong. For every 15 excess Trump-over-Biden voters in an American county, one more person has died of Covid. Yes, these technologies will bring risks because people will misuse them. But are AI technologies special? I, broadly, think not.

And as for the AI-alignment view that we are the crucial generation that is under existential threat and will set the moral compass in stone for all of humanity’s future, because we are on the verge of creating truly superhuman, superaccelerating intelligence, and that the key players in doing so are moral philosophers with Silicon Valley connections? Words really do fail me. I do not think I have been that narcissistic and grandiose since the days when I was beginning to wonder if I might, in fact, not be the only intelligent creature in the universe, and perhaps there might be another mind inside the giant milk source that appeared to be chirping at me.

Let us review the stages of infotech development:

  • Language, sometime further back than –100,000.

  • Clay for mnemonics, –3500.

  • Writing, –3000.

  • Paint.

  • Papyrus (and ink).

  • The scroll.

  • The alphabet.

  • The codex.

  • White space.

  • Arabic numerals.

  • Algebra & algorithms.

  • Writs.

  • Tables of contents, pagination, indexes, glossaries, and reference lists.

  • Fill-in forms.

  • Logarithmic, trigonometric, & navigational tables.

  • Telegraphs.

  • Typewriters.

  • White-out.

  • Punch-card tabulating systems—feed a punch card through a grid of electrified wires so that a current will turn an electromechanical counter, as patented by Herman Hollerith on January 8, 1889.

These were our information technologies as of June 16, 1911, at the founding of the Computing-Tabulating-Recording Company that was to become IBM. And the beginnings of what we now call the “computer” as a machine rather than as a human labor-market job classification.
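For the flavor of how little computation Hollerith's machine actually did, here is a toy software rendering of a punch-card tabulator. The column meanings and card layout are invented for illustration; punched positions stand in for holes that close a circuit and advance a counter:

```python
# Toy Hollerith-style tabulator: each "card" is a set of punched positions;
# each punched position closes a circuit and advances the counter wired to
# that column. Column meanings here are invented for illustration only.
from collections import Counter

COLUMNS = {0: "male", 1: "female", 2: "farm worker", 3: "city dweller"}

def tabulate(cards: list[set[int]]) -> Counter:
    counters = Counter()
    for card in cards:
        for position in card:
            counters[COLUMNS[position]] += 1  # one pulse, one tick of the dial
    return counters

cards = [{0, 2}, {1, 3}, {1, 2}]  # three census returns
print(tabulate(cards))
```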

Now we primarily use computers for—or, at least, I primarily use computers for:

  1. Recording and adding up columns of numbers–what “computers” did back when a “computer” was a human job category rather than a machine.

  2. Retyping edited documents.

  3. Rerunning simulations for changed parameters.

  4. Stuffing information into databases.

  5. Searching and extracting information from databases.

  6. Sending and receiving mail messages.

  7. Getting and reading books, magazines, newsletters, and newspapers.

  8. Getting and listening to audio.

  9. Getting and watching video.

  10. Searching and extracting information from geographic databases (e.g., Uber).

Very useful little things, computers are. As were the tools of pre-computer information technology, all the way back to language that made us truly an anthology intelligence.

A huge amount of white-collar information processing donkey-work changed from being burdensome, laborious, boring, and ever so time-consuming to getting done at near-lightspeed.

And it will continue with what we have right now to hand:

  1. Very large-data very high-dimension regression and classification analysis.

  2. Actual natural-language interfaces to structured and unstructured databases.

  3. Auto-complete for everything on Big Steroids.

  4. The post-crypto boom way of separating gullible VCs from their money for the benefit of engineers, speculators, utopians, and grifters.

  5. Highly verbal electronic-software pets.

  6. And more…

References:

2700 words
2024-03-14 Th DRAFT