twelve theses and predictions on “AGI” (falsely so-called)
1. “Artificial general intelligence,” defined as “a computer able to do any cognitive task a human can do” — as envisioned, for example, in this new work of science fiction — is computationally impossible to achieve.
2. This is because “intelligence” — in the sense of “normal human intelligence,” which is presupposed by the above definition of “AGI” — is a) impossible to fully and simultaneously articulate (hereon inarticulable) and b) non-deterministic, and therefore in at least two senses strictly non-computable.
3. The inarticulability of intelligence has (at the very least) to do with its embodied and relational aspects. “Mind” is neither identical with nor even co-extensive with “brain activity”; rather, “mind” (to crib from Dan Siegel’s definition) is an embodied and relational process. Emotion in particular seems, as far as the causality can be determined, to be body-first, brain-second, such that it is only articulable after the fact (and in a way that changes the emotional experience). Michael Polanyi’s great work demonstrates in a philosophical register what musicians, artists, and craftspeople have always known intuitively: that the “cognitive task” of playing an instrument or using a tool depends on integrating the instrument or tool into one’s bodily experience, in an inarticulable way. And relationship through interaction with other embodied minds is such a complex process, with so many emergent layers, that not only is it poorly theorized or modeled now, it may be impossible to exhaustively theorize or model — especially because it seems to take place primarily in and through the pre-articulate and inarticulate dimensions of cognition.
4. Meanwhile, the non-determinism of intelligence has (at the very least) to do with quantum randomness effects in the brain, which at the mesoscale (the level at which daily human life, and most complex organic life, takes place) emerge into relatively well-understood and predictable patterns, but at the nanoscale (the relevant level for a hypothetical deterministic model of cognition) are by definition impossible to predict, or even to observe without altering them. I am unaware of any good reason to think that the quantum effects in, say, an extremely large and inorganic GPU farm would be interchangeable with, or even meaningfully similar to, those in a three-pound organic human neural system.
5. What is computationally possible, as far as I can tell, is a (relatively) high-fidelity simulation of one aspect of human cognition: the comparatively deterministic, hyper-articulated mode which Iain McGilchrist identifies as characteristic of the left hemisphere (hereon LH) of our brains (subject, of course, to the obvious caveats from theses 2–4). Note: I am not saying, and I do not take McGilchrist to be saying, that a fully-computed model of the LH itself is possible; only that its characteristic thought-style can be simulated in high fidelity, precisely because that thought-style is comparatively deterministic and hyper-articulated.
6. In currently existing frontier Large Language Models (LLMs), I take it something like this has already been achieved. Commercially available LLMs are now (to use a technical term) pretty good at processing and reproducing both written and spoken natural language — albeit in such a sterile “voice” that it renders the phrase “natural language” almost meaningless — and quite good at analytically processing huge quantities of formally similar information. These are two of the characteristic specializations of LH cognition, and I expect the next generation of LLMs to be significantly better on both fronts. Notably, some of the persistent failure modes of LH cognition and of LLMs are startlingly similar: “hallucination” or fabrication of nonexistent supporting evidence, a predilection for lying or circumventing rules in order to achieve a desired result, a tendency to attend to parts at the expense of wholes, and so forth.
7. Because much of contemporary Western life (as McGilchrist and others have extensively documented) is already organized to systematically advantage that aspect of human cognition, it is no surprise, nor in a sense any remarkable accomplishment, that frontier models now perform at the level of PhD students in solving advanced physics problems (albeit ones with solutions known to currently existing physics), or that some chatbots now “pass the Turing Test.” This is the natural end result of reimagining science as “knowledge production” and credentialing scientists accordingly, or of technologically reducing the typical person’s normal experiences of and capacity for conversation to so great an extent that we now take what the LLMs offer to be “human” conversation. This — and all the attendant social/economic disruption (about which more below) — is all possible without “AGI” itself being computationally feasible.
8. The second strike against the possibility of “AGI” comes from limits in physical resources. Achievements in LLM development up to this point have been enabled by energy use, water depletion, and resource extraction on an already massive scale. The anticipated investments required for “AGI” (e.g., according to AI 2027, $2 quadrillion in new data centers over the next 10 years!!!) would require orders of magnitude more energy, water, and mineral resources than we either have on this planet or can physically extract from it at the desired rate (unless we invent, say, cold fusion). This is to say nothing of the land required to build all of the new infrastructure. I therefore anticipate that “AI” development will, as a function of resource scarcity, fail to get anywhere close to the scale of investment theoretically required for “AGI.” This may only become clear to “AI” developers, however, after they have already inflicted genuinely ruinous and probably irreversible damage to the environment and to the communities that depend on it.
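To make the scale concrete, here is a minimal back-of-envelope sketch. The cost-per-megawatt and global-generation figures below are my own illustrative round numbers, not figures from AI 2027; only the $2 quadrillion total comes from the citation above, and the point is merely the order of magnitude.

```python
# Back-of-envelope sketch of the scale implied by a $2 quadrillion build-out.
# The per-megawatt cost and global generation figures are illustrative
# assumptions (round numbers), not sourced estimates.

CAPEX_TOTAL_USD = 2e15           # $2 quadrillion, the AI 2027 figure cited above
USD_PER_MW = 1e7                 # assumed all-in data-center cost: ~$10M per MW
GLOBAL_AVG_GENERATION_TW = 3.0   # rough global average electricity output (~27,000 TWh/yr)

implied_mw = CAPEX_TOTAL_USD / USD_PER_MW   # megawatts the spend would buy
implied_tw = implied_mw / 1e6                # convert MW to TW

print(f"Implied new capacity: {implied_tw:,.0f} TW")
print(f"Multiple of current global electricity output: "
      f"{implied_tw / GLOBAL_AVG_GENERATION_TW:.0f}x")
# -> roughly 200 TW, i.e. dozens of times the electricity humanity generates today
```

Even if the assumed cost per megawatt is off by a factor of ten in either direction, the implied demand still dwarfs every existing grid combined, which is exactly the thesis’s point.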
9. Considering all this, I find it probable that, without ever achieving “artificial general intelligence” as imagined in science fiction, advances in “AI” over the next several years will make all but the top 1–5% of current “symbolic capitalists” functionally obsolete. This includes both high-status sectors such as consulting, finance, advertising, software development, and law and legal services, and lower-status (or at least lower-paying) sectors such as journalism, copywriting, teaching, administration, graphic design, and the social sciences. (Note that several of these lower-status professions are ones the Internet revolution has already been destroying.) By “functionally obsolete” I mean that it will almost always be more cost-effective, and nearly as useful, to “employ” an “AI agent” for a task that previously required one to hire a human being.
10. Sectors that are symbolic-capitalism-adjacent but require long training in embodied skill — e.g., healthcare, the experimental sciences, mechanical engineering, war — will not be functionally obsoleted, at least not so thoroughly. An inorganic robot will never be able to perform skilled tasks in the real world with the same level of ability as a trained human being (see thesis 3 above)… and “organic robots” capable of such skill would pretty much just be, well, lab-grown humans, with many of the same inefficiencies and time-delays as regular humans. (Only a conspiracy theorist would see current Silicon Valley investments in IVF, genetic selection and editing, and artificial wombs as an attempt to create the conditions of possibility for lab-grown humans… right???) But some current features of jobs in these sectors — the features, that is, which are most akin to “AI” core competencies — will be permanently outsourced to “AI agents.”
11. The “trades” and the “crafts,” on the other hand, will not become thoroughly automated, though they will be in various ways automation-directed and -augmented. Machine maintenance and repair, for instance: machine failure might be AI-diagnosable, but the intuitive skill necessary for actual repairs will remain the province of humans. To deal with water, you’ll always need a plumber. Reality has a surprising amount of detail, and fields like construction and mining will always require meaningful and skilled human attention to reckon with that detail. Agriculture is an interesting test case: it is already extremely mechanized, but as the lowest-skilled tier of human labor becomes (out of necessity) far cheaper to “buy,” it may reabsorb much of that excess labor capacity. At the more humanistic end of the spectrum, traditional crafts might make a comeback of sorts (similar to the vinyl resurgence), and the performing arts will always be the province of human beings, though probably far fewer people will be performing artists in fifteen years than are right now; in both cases patronage will be the only economically viable model. For the ultra-wealthy, owning or sponsoring something evidently made only by humans will be a status symbol.
12. In sum: I believe we are headed neither for the existential-risk, civilization-ending disaster scenarios envisioned by the “AI Doomers,” nor for the golden era of peace and prosperity and universal basic income envisioned by the “AI optimists.” (Where, exactly, do the optimists think the value creation for UBI will come from in an era of mass human unemployment?) Rather, I suspect in the near-ish term we are headed for a poorer, less culturally vibrant, less highly educated world with much greater wealth inequality. This will be a world in which many more people, including some who might otherwise have been symbolic capitalists, work in various kinds of manual labor or “trades”: agriculture, mining, energy, construction, maintenance. Others will depend, one way or another, on the patronage of the few ultra-wealthy. The whole service-economy apparatus that depends on a large leisure class will be semi-permanently diminished in proportion. It might, in other words, look in certain ways remarkably like the period of transition into the Industrial Revolution.
Over the long run, I believe in the resilience of humanity, chiefly because I believe in the faithfulness of God to His wayward creatures. We will not be destroyed or superseded by a higher form of intelligence, nor will we manage to completely destroy ourselves. We are remarkably adaptable and creative: life always finds a way. But we will find that the remarkably widespread prosperity of the last few decades in particular and the last two centuries in general is not, once unlocked, a permanent and automatic feature of human existence. It has depended on our irretrievably consuming the planet’s resources at an ever-accelerating rate. What cannot go on indefinitely must eventually stop. The mechanization snake will finally eat its own tail. The only question is how soon.