twelve theses and predictions on "AGI" (falsely so-called)
-
“Artificial general intelligence,” defined as “a computer able to do any cognitive task a human can do” — as envisioned for example in this new work of science fiction — is computationally impossible to achieve.
-
This is because “intelligence” — in the sense of “normal human intelligence,” which is presupposed by the above definition of “AGI” — is a) impossible to fully and simultaneously articulate (hereon inarticulable) and b) non-deterministic, and therefore in at least two senses strictly non-computable.
-
The inarticulability of intelligence has (at the very least) to do with its embodied and relational aspects. “Mind” is neither identical with nor even co-extensive with “brain activity”; rather, “mind” is (to crib from Dan Siegel’s definition) an embodied and relational process. Emotion in particular seems, as far as the causality can be determined, to be body-first, brain-second, such that it is only articulable after the fact (and in a way that changes the emotional experience). Michael Polanyi’s great work demonstrates in a philosophical register what musicians, artists, and craftspeople have always known intuitively: that the “cognitive task” of playing an instrument or using a tool depends on integrating the instrument or tool into one’s bodily experience, in an inarticulable way. And relationship through interaction with other embodied minds is such a complex process, with so many emergent layers, that not only is it poorly theorized or modeled now, it may be impossible to exhaustively theorize or model — especially because it primarily seems to take place in and through the pre- and in-articulate dimensions of cognition.
-
Meanwhile, the non-determinism of intelligence has (at the very least) to do with quantum randomness effects in the brain, which at the mesoscale (the level at which daily human, and most complex organic, life takes place) emerge into relatively well-understood and predictable patterns, but at the nanoscale (the relevant level for a hypothetical deterministic model of cognition) are by definition impossible to predict, or even observe without altering them. I am unaware of any good reason to think the quantum effects in, say, an extremely large and inorganic GPU farm, would be interchangeable with or even meaningfully similar to those in a three-pound organic human neural system.
-
What is computationally possible, as far as I can tell, is a (relatively) high-fidelity simulation of one aspect of human cognition: the comparatively deterministic, hyper-articulated aspect of human cognition which Iain McGilchrist identifies as characteristic of the left hemisphere (hereon LH) of our brains (subject, of course, to obvious caveats from theses 2–4). Note: I am not saying, and I do not take McGilchrist to be saying, that a fully-computed model of the LH itself is possible; only that its characteristic thought-style can be simulated in high fidelity, precisely because that thought-style is comparatively deterministic and hyper-articulated.
-
In currently existing frontier Large Language Models (LLMs), I take it something like this has already been achieved. Commercially available LLMs are now (to use a technical term) pretty good at processing and reproducing both written and spoken natural language — albeit in such a sterile “voice” that it renders the phrase “natural language” almost meaningless — and quite good at analytically processing huge quantities of formally similar information. These are two of the characteristic specializations of LH cognition, and I expect the next generation of LLMs to be significantly better on both fronts. Notably, some of the persistent failure modes of LH cognition and of LLMs are startlingly similar: “hallucination” or fabrication of nonexistent supporting evidence, a predilection for lying or circumventing rules in order to achieve a desired result, a fixation on parts at the expense of wholes, and so forth.
-
Because much of contemporary Western life (as McGilchrist and others have extensively documented) is already organized to systematically advantage that aspect of human cognition, it is therefore neither a surprise nor, in a sense, a remarkable accomplishment that frontier models now perform at the level of PhD students in solving advanced physics problems (albeit ones with solutions known to currently existing physics), or that some chatbots now “pass the Turing Test.” This is the natural end result of reimagining science as “knowledge production” and credentialing scientists accordingly, or of technologically reducing the typical person’s normal experiences of and capacity for conversation to so great an extent that we now take what the LLMs offer to be “human” conversation. This — and all the attendant social/economic disruption (about which more below) — is all possible without “AGI” itself being computationally feasible.
-
The second strike against the possibility of “AGI” comes from limits in physical resources. Achievements in LLM development up to this point have been enabled by energy use, water depletion, and resource extraction on an already massive scale. The anticipated investments required for “AGI” (e.g., according to AI 2027, $2 quadrillion in new data centers over the next 10 years!!!) will require exponentially more energy, water, and mineral resources that we either simply do not have on this planet or cannot physically extract from it at the desired rate (unless we invent, say, cold fusion). This is to say nothing of the land required to build all of the new infrastructure. I therefore anticipate that “AI” development will, as a function of resource scarcity, fail to get anywhere close to the scale of investment theoretically required for “AGI.” This may only become clear to “AI” developers, however, after they have already inflicted genuinely ruinous and probably irreversible damage to the environment and to the communities that depend on it.
-
Considering all this, I find it probable that without ever achieving “artificial general intelligence” as imagined in science fiction, advances in “AI” over the next several years will make all but the top 1–5% of current “symbolic capitalists” functionally obsolete. This includes both high-status sectors such as consulting, finance, advertising, software development, law and legal services, etc., and lower-status (or at least lower-paying) sectors such as journalism, copywriting, teaching, administration, graphic design, the social sciences, etc. (Note that several of these lower-status professions are ones which the Internet revolution has already been destroying.) By “functionally obsolete” I mean that it will almost always be more cost-effective, and nearly as useful, to “employ” an “AI agent” for a task that previously required one to hire a human being.
-
Sectors that are symbolic-capitalism-adjacent but require long training in embodied skill — e.g., healthcare, the experimental sciences, mechanical engineering, war — will not be functionally obsoleted, at least not so thoroughly. An inorganic robot will never be able to perform skilled tasks in the real world with the same level of ability as a trained human being (see (3) above)… and “organic robots” capable of such skill would pretty much just be, well, lab-grown humans, with many of the same inefficiencies and time-delays as regular humans. (Only a conspiracy theorist would see current Silicon Valley investments in IVF, genetic selection and editing, and artificial wombs as an attempt to create the conditions of possibility for lab-grown humans… right???) But some current features of jobs in these sectors — the features, that is, which are most akin to “AI” core competencies — will be permanently outsourced to “AI agents.”
-
The “trades” and the “crafts,” on the other hand, will not become thoroughly automated, though they will be in various ways automation-directed and -augmented. Machine maintenance and repair, for instance: machine failure might be AI-diagnosable, but the intuitive skill necessary for actual repairs will remain the province of humans. To deal with water, you’ll always need a plumber. Reality has a surprising amount of detail, and fields like construction and mining will always require meaningful and skilled human attention to reckon with that detail. Agriculture represents an interesting test case: a field that is currently extremely mechanized, but one which, as the lowest-skilled tier of human labor becomes (out of necessity) far cheaper to “buy,” may reabsorb much of that excess labor capacity. At the more humanistic end of the spectrum, traditional crafts might make a comeback of sorts (similar to the vinyl resurgence), and the performing arts will always be the province of human beings, though probably far fewer people will be performing artists in fifteen years than are right now; in both cases patronage will be the only economically viable model. For the ultra-wealthy, owning or sponsoring something evidently made only by humans will be a status symbol.
-
In sum: I believe we are headed neither for the existential-risk, civilization-ending disaster scenarios envisioned by the “AI Doomers,” nor for the golden era of peace and prosperity and universal basic income envisioned by the “AI optimists.” (Where, exactly, do the optimists think the value creation for UBI will come from in an era of mass human unemployment?) Rather, I suspect in the near-ish term we are headed for a poorer, less culturally vibrant, less highly educated world with much greater wealth inequality. This will be a world in which many more people, including some who might otherwise have been symbolic capitalists, work in various kinds of manual labor or “trades”: agriculture, mining, energy, construction, maintenance. Others will depend, one way or another, on the patronage of the few ultra-wealthy. The whole service-economy apparatus that depends on a large leisure class will be semi-permanently diminished in proportion. It might, in other words, look in certain ways remarkably like the period of transition into the Industrial Revolution.
Over the long run, I believe in the resilience of humanity, chiefly because I believe in the faithfulness of God to His wayward creatures. We will not be destroyed or superseded by a higher form of intelligence, nor will we manage to completely destroy ourselves. We are remarkably adaptable and creative: life always finds a way. But we will find that the remarkably widespread prosperity of the last few decades in particular and the last two centuries in general is not, once unlocked, a permanent and automatic feature of human existence. It has depended on our irretrievably consuming the planet’s resources at an ever-accelerating rate. What cannot go on indefinitely must eventually stop. The mechanization snake will finally eat its own tail. The only question is how soon.
Addendum (08.15.2025): Well, this has had much more of an afterlife than I expected. And we have had four months more of AI development (i.e., several lifetimes). What do I think now about all of the above? So far, I feel quite as confident about items #1–7 as the day I wrote them. I am nearly as confident about #8, though I recognize that energy/water use per query is something of a moving target and that many of the major AI developers are investing in energy solutions. (Here’s a science fiction scenario for you: What if we came up with, say, a non-destructive way to siphon geothermal energy from the Yellowstone supervolcano, powering everyone’s Claude Code instances for eons while diverting a real civilizational X-risk?) My greater environmental concern remains the resource extraction — and corresponding ecosystem degradation/destruction, at points of both origin and destination — necessary to build the proliferating data centers that will be necessary to build more and more capable models, and to supply the infrastructure upgrades that get water and energy to those data centers. (This is to say nothing of the environmental justice questions about where data centers are actually built.) The typical American utility depends on infrastructure that is decades old, whose construction was predicated on a long-deceased funding model (i.e., massive federal subsidies), and which is now desperately in need of repair at practically every point. It’s like the old joke about True Libertarians: How will they drive wherever they want, as fast as they want, if there aren’t any roads? Color me skeptical that AI developers will actually prioritize massive upgrades to local infrastructure in places where they build their data centers, though I would love to hear stories to the contrary.
The big questions come, of course, with #9, and perhaps to a lesser degree #10–12. I have looked back on #9 a number of times and thought: “What was I thinking?” 95–99% of current symbolic capitalists is a pretty large number! So let me offer a couple of qualifications and clarifications. First, by saying “probable,” I think I meant “more than 50% probability.” (In my lexicon, I think I use “unlikely” for <10%, “possible” for 25–50%, “likely” for, oh, I don’t know, 60–75%, and “almost certain” for >90%.) I would have been comfortable saying then, and maybe still am, that it is “almost certain” that 50% of current symbolic capitalists will be functionally obsoleted (more on that term in a second). Note that this is not the same thing as Dario Amodei’s “half of all entry-level white-collar jobs.” There are a lot of even entry-level white-collar workers — especially in healthcare — who are not pure symbolic capitalists, in Musa al-Gharbi’s sense, even if they presently do a great number of symbolic-capitalist things in the course of their everyday work; my #10 was meant in part to guard against this misconception. And, even in the most successful AI development scenario I can reasonably imagine, functional obsolescence of the bottom 95% of symbolic capitalists does not mean that all those people go “oh well, guess I’d better become a farmhand.” It does mean they move down the prestige and remuneration ladder in various ways, but probably does not mean they have to leap off and start again from the bottom. It also means that fewer young people even aspire to enter those fields in the first place, instead proactively entering less automatable fields (anecdotally, at all colleges outside the absolute most elite tier, student interest in nursing programs is just skyrocketing). And it means that the gates behind which the true symbolic-capitalist jobs are kept become ever more difficult to unlock. A PhD has long offered no guarantee of success, or even of a job, in any field. Now some of the most ostensibly prestigious PhDs are becoming, literally, impossible to get.
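(For what it’s worth, that probability lexicon is simple enough to write down as a small lookup table. Here is a minimal sketch — not part of the original theses, with variable names of my own invention — which records only the ranges the text actually gives and leaves the undefined gaps uncovered:)

```python
# A throwaway sketch of the probability lexicon described above, as a lookup
# table. Ranges come from the text; the gaps (10–25%, 50–60%, 75–90%) are
# simply not covered, because the essay leaves them undefined.

PROBABILITY_LEXICON = {
    "unlikely":       (0.00, 0.10),  # < 10%
    "possible":       (0.25, 0.50),  # 25–50%
    "probable":       (0.50, 1.00),  # "more than 50% probability"
    "likely":         (0.60, 0.75),  # 60–75%
    "almost certain": (0.90, 1.00),  # > 90%
}

def words_for(p: float) -> list[str]:
    """Return every term in the lexicon whose range covers probability p."""
    return [term for term, (lo, hi) in PROBABILITY_LEXICON.items() if lo <= p <= hi]

print(words_for(0.95))  # ['probable', 'almost certain']
```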
“Functional obsolescence” is also an important term for my argument in thesis #9, one which I did not define in quite the way I should have. My definition was that “it will almost always be more cost-effective, and nearly as useful, to ‘employ’ an ‘AI agent’ for a task that previously required one to hire a human being.” The term “cost-effective” should be clear. “Useful” is obviously less so. What I basically meant is that, to a manager, the short-term quality of work that an “AI agent” can do for the entry-level tasks that require little contextual knowledge will seem so impressive that they will, by default, choose to defer or forgo the long-term gains of adding another human person to the team. In any organization, a new hire, no matter how intelligent or experienced, needs some time to learn the ropes, and depending on the context it may be not just weeks or months but years before the institution sees its return on investment, as it were. (There is a different calculus for primarily internal-facing and primarily external-facing roles here; I suspect the functional obsolescence of the sales account manager, whose personal touch with the client is in fact part of the package sold to the client, will be far slower to arrive than the functional obsolescence of the sales account manager’s executive assistant.) This does not prevent a company from placing an explicit value on hiring and developing human employees, such that it is willing to forgo the short-term advantages of AI agents. I am immensely grateful to work for one such company. I suspect there are many other symbolic-capitalist companies that are still talking this people-centric talk, but are quietly making preparations to cease walking the walk as soon as it becomes practical to take a different path.
The other main reason that #9 seems far less plausible to a lot of people now than it perhaps did four months ago is the stubborn persistence of certain kinds of AI failures and hallucinations. The technology still makes up references to nonexistent sources (though less often than it used to); it still offers impossible and logically laughable ex post facto rationalizations for “conclusions” at which it claims to have “arrived” via “chains of thought,” thus obscuring its inner workings (though, admittedly, real human beings are awfully good at this as well); it still can’t reliably tell you how many B’s there are in the word “blueberry” or how many U.S. state names contain the letter R. The recent much-hyped launch, and notable failures, of OpenAI’s GPT-5 has Maria Sukhareva announcing that “AI Winter is coming.” And indeed, if this indicates that we are reaching the upper bound of the logistic curve and the rate of progress is leveling off, then “AI agents” will not be anywhere near as good “in the next several years” as is necessary to functionally obsolete 95–99% of current symbolic capitalists. They will still be an amazing technological feat by any metric, but like many such feats of the industrial era, they will not actually decrease the absolute amount of work there is to be done, but simply reshape much of the existing work around themselves. They will fail to be transformative in the way that, say, the telegraph was to the nineteenth-century general or journalist (completely reshaping the possibility structure in which their work is done), instead becoming something more like the washing machine to the mid-century housewife (removing a source of drudgery, freeing up time for… other kinds of drudgery). I was deliberately vague in saying this could happen in “the next several years” — recall: the transformer architecture on which our contemporary LLMs are built was invented in 2017, which in my lexicon is only “several years ago”! — so let’s wait until, say, 2032 to render final judgment. Unfortunately, my intuitions here are still pretty pessimistic. I do think GPT-5’s failures indicate some systemic misconceptions (see, again, items #1–7), but humans can get awfully, even terrifyingly, far operating in a purely LH mode of cognition. We may genuinely have passed the inflection point on the logistic curve, after which we will see fewer and fewer dramatic breakthroughs and more and more high-profile failures… and still have a long, long way to go.
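(An aside on those letter-counting stumbles: part of what makes them so glaring is that they are trivial for ordinary deterministic computation. A throwaway sketch, not from the original post — the state list is just the standard fifty names — makes the contrast plain:)

```python
# Deterministic versions of the tasks frontier chatbots still fumble.
# The point is the contrast: a few lines of ordinary code, with one exact
# answer, versus a probabilistic next-token predictor.

US_STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# How many B's are there in the word "blueberry"? (Two.)
b_count = "blueberry".lower().count("b")

# How many U.S. state names contain the letter R?
states_with_r = [name for name in US_STATES if "r" in name.lower()]

print(b_count)
print(len(states_with_r), sorted(states_with_r))
```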
The larger point to be made, and the place where, in my view, real unpredictability enters the picture, is that none of these developments happen in laboratory-style isolation from their reception in culture. (If there is any such thing as “laboratory-style isolation,” on which again see Personal Knowledge.) That reception, as far as I can tell, is not going well. The proliferation of “AI slop” is incredibly unpopular, even if at this point few people can reliably distinguish it from “real,” human-grade slop. I cannot imagine parents (at least in upper-middle-class contexts) long enduring the indiscriminate application of chatbot-based ed-tech products to their kids' classrooms; Jonathan Haidt’s The Anxious Generation seems to be striking too strong a nerve for that. If (speaking extremely loosely here) the washing machine created “the problem that has no name” for mid-century housewives, thus accelerating the development of second-wave feminism, what social upheaval(s) might the washing machines of the AI field accelerate or unleash? I have no idea. And those will shape the outcomes I predict in #11 and, especially, #12. So with that (very large) asterisk placed next to them… I more or less stand by them. Rogue AI is not going to decide it needs to wipe us all out with commandeered nuclear weapons to protect its colonization of the universe (the doomsday scenario in “AI 2027”), nor will properly aligned AI usher in a drudgery-free UBI paradise (the golden-age scenario in “AI 2027”). Both of those scenarios depend on an inflated and wrong view of AI’s capability to genuinely imitate human intelligence. Instead, the real damage will be done by those purportedly racing to prevent the first and usher in the second, who are on public record as having zero clue what their actual end goal or desired end point is. They seem to imagine it will go on forever and ever, growth without end. But Stein’s Law is undefeated, because it is in fact a restatement of the second law of thermodynamics: “What cannot go on forever must eventually stop.”