Program Notes


what is it like to be an Atlantic columnist?

#

This is a fascinating story in The Atlantic on the scholarship (and, inevitably, politics) of “experiential relativity” — the hypothesis that human emotional and affective experience differs dramatically, perhaps almost incommensurably, between times and places, and that only in globalized Enlightened modernity do we take for granted that, say, “sadness” or “pain” is the same everywhere and always. (Indeed, something like this is a core presupposition of globalized Enlightened modernity.) Two notes:

  1. Gal Beckerman does pretty well, I think, with the philosophical and historical issues at play, but there is at least one howler: “The universalism that Boddice [the profiled scholar] mistrusts is a relatively new concept in human history. It comes to us from the Enlightenment. The presumption that all people share a common nature was dreamed up by European intellectuals sitting in their salons.” No, this presumption most proximately comes to us from the New Testament, interpreting the Old Testament datum of humanity created “in God’s image” (Gen. 1:26) by way of the classical humanism expressed in, say, Terence’s dictum, as “God hath made of one blood all the nations of the earth” (Acts 17:26). Yes, there is a signature Enlightenment refraction of this. No, they did not “dream it up.”

  2. One wonders what the late Alasdair MacIntyre, with his interest in the incommensurability of traditions, would have made of this.

the case of Wagner

#

As an example of Wagner’s misunderstanding of his own expressivist genius, take his most self-conscious effort at a classicist work, Die Meistersinger von Nürnberg. Here is a — wonderful and energetic! — performance of the Prelude. Wagner pitches it in a faux-Baroque register and dials back the adventurous harmonies a few ticks, such that it sounds by far the most conventional of his operas (compare even the earlier Prelude to Lohengrin, which is hardly Tristan und Isolde in its adventurousness). It is an intriguing experiment for the expressivist-in-chief. And it is hard to deny that this piece is a lot of fun!

But aesthetically speaking, the experiment is a failure. Even when writing in a self-consciously restrained idiom, Wagner is unable to actually exercise any classicist restraint. (One hears a voice from Delphi intoning sternly: “Nothing in excess!”) There is hardly a passage that does not have at least one too many things going on at once, and often two or three.

The problems are signaled from the very beginning. It begins in classical style, with a four-bar subphrase, and then a two-bar gesture that follows the first two bars rhythmically while moving toward a different part of the harmonic map. So far, so promising. (Though even in the first bar, Wagner hints that he wants to escape the mold harmonically with an undeniably tasty passing whole-tone triad over VI — A•C#•F.) But then the theme goes off in a new direction: a three-bar descending sequence that helps establish (in its second bar) a dominant pedal tone, followed by four bars of intense string noodling over the dominant pedal tone, resolving to a… restatement of the main theme that begins in the subdominant! Because of this irregular phrase structure of 4+2+3+4, with no harmonic movement of any kind in the last subphrase (indeed much of it is exact repetition), the restatement of the theme arrives — somehow — aurally both too late and too early. And after that restatement begins, Wagner gives us an exhaustingly thick texture, in which for eleven consecutive bars (note that we have another 13-bar chunk here) there is not a single eighth-value without movement somewhere in the orchestra. The lines seem to climb indefinitely to the secondary dominant, ratcheting the musical tension up and up (and up and up and up)… until we finally get the first major resolution. Then follows an actually quite lovely passage that introduces the lyrical second theme (whatever else one wants to say about him, Wagner is good for a lyrical second theme), twice in full, then twice fragmenting, and it is all feeling very proportionate and balanced… until the strings start buzzing around while the winds are still unspooling the theme, with an irregularly beaten scalar figure that causes one to completely lose track of the meter. 

The opening texture returns with the bold third theme, which is actually set in nice, symmetrical four-bar phrases most of the way to the next subject (only one random five-bar phrase thrown in there)… but it is thick and loud the whole way, without much real development, and with so much activity everywhere in the orchestra that it is hard to attend to any one line. Wagner gives us a great deal of sound and fury (cheerful fury, at least), but signifying really very little.
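An aside for the harmonically curious: the whole-tone aside above can be checked with a bit of pitch-class arithmetic. This sketch is purely my own illustration (the `PITCH_CLASSES` table and `intervals` helper are hypothetical names, not from any score or reference): in C major, VI is A, and the passing triad A•C#•F stacks two major thirds of four semitones each, making it an augmented triad whose three notes all sit in a single whole-tone scale. The irregular 4+2+3+4 phrase structure likewise sums to the thirteen bars noted above.

```python
# Pitch classes: C = 0, ..., B = 11 (semitones above C)
PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                 "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def intervals(notes):
    """Successive intervals in semitones, mod 12, wrapping around the end."""
    pcs = [PITCH_CLASSES[n] for n in notes]
    return [(pcs[(i + 1) % len(pcs)] - pcs[i]) % 12 for i in range(len(pcs))]

triad = ["A", "C#", "F"]
print(intervals(triad))   # [4, 4, 4]: stacked major thirds, i.e. an augmented triad

# All three notes share one whole-tone collection (all-odd pitch classes here):
print({PITCH_CLASSES[n] % 2 for n in triad})   # {1}

# And the irregular phrase structure discussed above:
print(sum([4, 2, 3, 4]))   # 13 bars
```

The symmetry is the point: an augmented triad divides the octave into three equal parts, so it belongs to no single major or minor key, which is exactly what makes it "tasty" as a passing sonority in an otherwise diatonic opening.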

For another example of this aesthetic dynamic, listen to the short quasi-fugal passage starting at 6:07 (p. 22 in the unfortunately unmeasured score). There are three different fugal subjects or motives — the main subject, which is taken from the third theme, starts in the winds; the accompanying subject, which seems to be new in this section, begins in the celli; and the repeated trill motif, which appears to be drawn from the preceding lyrical passage. It is not illegitimate to have three elements in your contrapuntal texture. But Wagner cannot stop himself from using all three right from the beginning, rather than introducing them one at a time. From the start of the passage it is impossible to tell by ear which of the two subjects Wagner intends to be primary. I find it telling that, in the video linked above, the video editing cuts from string section to string section with each statement of the accompanying subject, even though at the beginning of the passage Alain Altinoglu actually cues the winds for their primary (and more melodic!) subject, and only gives the celli a secondary cue. Altinoglu makes the correct choice — but can you blame the video editor for not following him? The most egregious choice is that the trill motif, after being quietly introduced in the bassoons in the first measure, is given to the second violins in the second measure and marked sforzando. It is an orchestra-wide sforzando, to be sure, but Wagner has not given us a chance to hear the main subject(s) in this setting before making the second violins rudely interrupt the (quieter) winds and celli! It sounds like someone has come in too early — and not in the way of the famous horn entrance at the recapitulation in Beethoven’s Third Symphony. Either it is aesthetic incompetence, or it is a deliberate thumbing of the nose at the fugue (which might count as aesthetic incompetence).

For ten minutes, it is practically all like this: nearly constant (almost frenetic) activity in the subsidiary voices, lines that go on just a bit too long, elements that are repeated too many times, such that it is almost impossible to actually attend to any individual element as it belongs to the whole. Instead, the listener must accept the whole, lean back in his or her chair, and be sucked under by the aesthetic tsunami. None of this makes, as I said above, the piece any less fun (for the audience; it is rather a pain to perform, at least as a string player). And indeed it has a kind of showiness that is characteristic of the whole Wagnerian Schule, independent of style. But it is one thing to embrace (or at least tolerate) the showiness and the excess when the work in question is chiefly expressivist. It is another when the work aspires, or at least appears to aspire, to meet and embody certain classical standards of excellence. Nobody forced Wagner to write in this idiom; he composed it in 1862, when he was comfortably established and internationally famous. The showiness and the excess indicate that its composer does not understand the real value and spirit of classicism, and its link with expressivism.

expressivism & classicism

#

The great works of European art music exist in an aesthetic field defined by the dipole of classicism and expressivism, the two generative sensibilities that drive (or drove) developments in musical style between about 1700 and 1950. By “dipole” I am not picturing a Venn diagram but something more like a magnet:

As in the magnetic field, classicism and expressivism are dialectical and interpenetrating, not antithetical and opposing. There is no “pure” specimen of either. No one work, or composer, ever exemplifies one sensibility to the total exclusion of the other. A work may be predominantly classicist even while its composer utilizes musical language and forms that were originally developed from expressivist impulses. (Note: I use these two terms here in largely ahistorical ways — small-c and small-e, not the capitalized historical movements Classicism and Expressivism — for lack of better terms occurring to me at the moment.)

The mark of classicism is the aspiration to balance and perfection in musical presentation. The classical sensibility yields the sort of work about which one thinks: “There was not a single note out of place” — even if, as the Emperor is supposed to have told Mozart, there may have been “too many notes.” Stereotypically, Western music loves four-bar phrases, clean chord progressions with well-prepared resolutions, standard accompaniment figures and phrases (the Alberti bass being the most famous), and the like: these are hallmarks of the classical. There is a self-conscious inhabiting of traditional forms, even as they may be innovated or subverted in various ways. To be sure, there may be musical surprises, but they do not feel experimental. To the listener there is little or no sense of struggle in the act of composition, no matter how dramatic the music itself may be. To the performer the chief difficulty is making the music seem effortless, regardless of the technical challenge it may pose. The overriding impression in the greatest of these works is of exquisite craftsmanship, occasionally of an almost unearthly or inhuman perfection.

The mark of expressivism is the aspiration to communicate the hitherto incommunicable, to somehow reach across the gulf between composer, performer(s), and audience. (Note the asymmetry between the two core aspirations.) The expressive sensibility yields the sort of work about which one thinks: “That was so powerful!” — even if, in places, it seemed overwrought or difficult to follow. (Rossini’s remark about Wagner’s operas is apposite: “marvelous moments, and dreadful quarters of an hour.”) Traditional forms and stereotyped devices are used, but not loved; they are the composer’s vehicle, not his or her habitation. Every aspect of the music is, if not actually experimental, a potential site for experimentation; there are not so much musical “surprises” as a more or less steady experience of “surprise.” The listener is expected to not just hear but feel the sense of personal exertion that has gone into the composition; even the less dramatic moments reveal the struggle for expression by temporarily concealing it. To the performer the chief difficulty is summoning the emotional vigor to make the music seem sufficiently effortful. The overriding impression in the greatest of these works is of overwhelming genius, that the composer has somehow expressed the previously inexpressible.

The greatest of classicists is, of course, Mozart. There is nobody to match him — except perhaps Schubert, who stands after him but in the same rank. The second rank of classicists includes Mendelssohn, Chopin, Fauré, Tchaikovsky (yes, a classicist by temperament, except perhaps revealing his expressivist side in the Sixth Symphony!), Rachmaninoff, and the late Stravinsky (there’s something about those Francophile Russians). I am unsure whether to say Haydn is a classicist or an expressivist at heart; probably a classicist, albeit one who was toying with expressivism before it had come to full flower. Richard Strauss had the fullness of classicism within him — especially present, perhaps, in his Eine Alpensinfonie, in his Violin Sonata, and in some passages of the early tone poems. Mahler, too, wrote some marvelous classical passages, though mostly integrated into overall expressivist works — especially the waltz movements in his earlier symphonies; in his later period, the Sixth and Eighth Symphonies are remarkably classicist works despite the force of their expression.

The greatest of expressivists, who ushered this sensibility into maturity after Haydn had disclosed a new measure of its potential, is Beethoven. In his earliest works, one can hear him toying brilliantly (if sometimes unimpressively) with the classicism of his teachers, at times sounding impatient to get on to writing in his own way. His Third Symphony is still the touchstone expressivist (and Romantic) work, often imitated but never bettered, with its astonishing self-confidence, its total mastery of and almost equally total disregard for musical convention. More subtle in this regard, but no less masterful, are his late string quartets, especially Opp. 130 and 131. (But in the Fifth and Sixth Symphonies especially Beethoven showed that his embrace of expressivism did not indicate a total repudiation of classicism.) Wagner undoubtedly aspired to be, and maybe imagined himself to be, the greatest of expressivists, but he did not understand its hidden and tragic secret: that it depends irreducibly on the dialectical tension with the classicist pole for its power. Among Beethoven’s successors, the greatest expressivist accomplishments are those of Schumann (in the solo piano works), Mahler (in the Second and Ninth Symphonies), and Strauss (Ein Heldenleben), though they also at times exhibit the tragic tendency of expressivism to cut loose from classicism and thus lose itself. Also deserving mention are the French luminaries of expressivism, Debussy and Ravel. The early Schoenberg (cf. Verklärte Nacht and the first string quartet) had the promise of greatness, but his turn to anti-tonality was his undoing. Dmitri Shostakovich, long after much of European music had followed Schoenberg down his disastrous path, continued cultivating the genius of expressivism, as did his Soviet colleague Sergei Prokofiev.

In the middle zone of the dipole, offering remarkable and singular syntheses of these two sensibilities, stand J. S. Bach and Brahms. Perhaps less brilliant than those two, but great nevertheless, is Anton Bruckner, who offers his great expressiveness with remarkable musical economy and in a (yet more remarkable) spirit of humility. And the Vier letzte Lieder of Strauss dwell in the same extraordinary territory.


Next I need to read Nietzsche’s The Birth of Tragedy — whose title, I just learned, originally continued … Out of the Spirit of Music — to see how closely my intuition here maps to his famous juxtaposition of Apollo and Dionysos.

Brahms, modernity, and tradition

#

I have written suspiciously little about music for a blog entitled “Program Notes”. Well, last week — May 7, to be exact — was Johannes Brahms’s birthday (happy 192nd, Johannes!), so it seems a fitting moment to write down some thoughts I have had floating around for a while.

Brahms occupies a curious place in Western musical history. Among the pantheon of “great composers,” he is perhaps the first who was not self-consciously an innovator. (Perhaps Mozart is a counter-example? But if in his youthful period one hears nothing but an almost uncanny perfecting of the Classical style, the mature works of his final decade disclose a latent genius for musical innovation that at times nearly shatters the mold; as with Schubert, one can only imagine how different music would have been had he been given more time.) During his own lifetime Brahms was known as a notable musical conservative, a protégé of the Schumanns and an inheritor of the Beethoven legacy, in contrast to the self-consciously progressive and experimental followers of Liszt (especially Wagner) — the (hyperbolically) so-called “War of the Romantics.” There is no single genre with which he is singularly identified, which he can be said to have (re)invented, transformed, or redefined — unlike Mozart with the opera, Schubert with the song cycle, Liszt with the tone poem, Mahler with the symphony, or Beethoven with pretty much everything (but especially the symphony and the string quartet). Nor is there an instrument whose technique is distinctively and permanently marked by his influence — unlike Bach for the violin and the keyboard, Beethoven and Chopin and Liszt for the modern piano, or Mendelssohn and Wagner (quite differently from one another) for orchestration. His style pays constant homage to Bach’s finely tuned counterpoint, Haydn’s sense of proportion, Beethoven’s ear for the dramatic flair, and Schumann’s expressive melody; but it is hard to say, whatever it might mean to say it, that in any of these areas he “improves” upon his forebears.

Part of all this, no doubt, is that Brahms was a notorious perfectionist — spending nearly twenty years writing and rewriting his First Symphony, and burning the manuscripts of more than a dozen string quartets he considered inadequate. But one can equally say of “perfectionism” that it is an unwillingness to measure oneself by any standards that transcend or relativize those one is given. The First Symphony, after all, took twenty years because it had to be worthy to publish after Beethoven’s Ninth (a burden which Schumann and Mendelssohn had notably not felt). Beethoven’s sheer artistic self-belief (and self-regard) was what permitted him to dispense with the artistic conventions he inherited from Haydn and Mozart, and every great composer after Beethoven considered that to be truly great one must at least try to be like Beethoven in this respect. Every great composer, that is, except for Brahms. He alone seemed to think it worthy to simply and creatively conserve the traditions he inherited, offering to posterity a handful of finely polished gems in which, like the Silmarils of Fëanor, the light of now-past ages is caught and distinctively refracted. And a small handful indeed: in the genres which his great forebears had seen, or had come to be seen, as offering special artistic statements — the string quartet, the piano trio, the piano sonata, and above all the symphony — he left just a few pieces each: string quartets three, piano trios three, piano sonatas three, and symphonies four. If there is a genre in which he was, perhaps, the greatest “innovator” of his day, it is that genre which most self-consciously honors the past: the theme and variations.

All this may sound curiously negative, as though I am suggesting (as Richard Strauss said about himself) that Brahms is “not a first-rate composer, but a first-class second-rate composer.” Not so. Brahms, in his totality, is certainly greater than Strauss (who, as the Brits say, routinely over-eggs the pudding a bit — though that masterpiece of his twilight years, the Vier letzte Lieder, deserves to stand in the first rank). The best passages in Brahms are as transcendently great as anything in Beethoven or Mozart. I am thinking, specifically, of the last five minutes (102 bars) in the first movement of the First Piano Concerto, though there is any number of passages I could spotlight. This movement, and this passage in particular, exemplifies all the best qualities of his writing: the organic expansion of just one or two simple musical cells into a vast whole; a remarkable economy of both counterpoint (there are rarely more than two separate lines moving simultaneously) and orchestration (somehow creating a full, sustained sound without Wagnerian orchestral busy-work); judicious exploitation of the flexible, propulsive rhythms available in his long triple meter, keeping the energy flowing through long yet elegantly balanced melodic lines; and the perfectly seamless, almost invisibly prepared transition from the calm light of the second theme to the darkness of the coda (at bar 438, 22:03 in the above linked recording), like a great cloud slowly obscuring the face of the Sun. There is nothing pretentious, nothing self-serious, nothing indulgent in Brahms. Everything is heartfelt, often even passionate, but utterly sincere. Where Mozart’s music sounds effortless, almost too perfect to be real, and Beethoven’s music sounds immensely effortful, every note as if written with blood — well, Brahms’s music sounds, simply, human: the music of human life, life as really lived, not as larger than life.

Now indulge me as I offer some speculation. In his masterpiece of criticism Real Presences (1989), George Steiner draws attention to the “broken contract” between logos and kosmos, immanent language and transcendent reality: the gulf (so he argues) at the heart of modern humanity’s sense of alienation. If there is no God, there is no “real presence” in anything we say: our words are meaningless. (Steiner himself was, notably, unable to believe in God: throughout Real Presences he writes of God and the transcendent as one who longs for but cannot himself have them.) And it was in the 1870s, Steiner suggests, that European critics and scholars began to advocate for detaching logos and kosmos. I am not even one-thousandth the expert on European arts and letters that Steiner was, but I cannot help noting that in the realm of music, this is precisely the period when tonality and tradition — the so-called “Common Practice” — begins to break down. Wagner’s “Tristan chord” (premiered 1865) is often seen as the touchstone for this development: the first public statement by a leading composer in a major work that the boundaries of tonality and the “rules” of voice-leading could be breached for the sake of expression. Of course, one is not terribly hard pressed to find Tristan-chord-like harmonies and resolutions in earlier composers (Schumann!), but it is hard to deny that there is something… flagrant? iconoclastic? Promethean? in the use Wagner there makes of it. And in any case, the floodgates opened in the 1870s and onward — with Verdi, Franck, Saint-Saëns, and Mussorgsky (all born before 1850) following Wagner in preparing the ground for really major innovations by Puccini, Mahler, Debussy, and Strauss (born after 1850) within a just-barely-tonal paradigm. 

By the early decades of the 20th century, Ravel and Scriabin were conjuring essentially non-tonal landscapes, and Schoenberg had abandoned tonality outright, going on to develop the twelve-tone paradigm by which he sought to banish the concept of a single tonal center from his music — a deliberate repudiation of kosmos in favor of (a highly mathematical and schematized notion of) logos. All that was (apparently) solid melted, in the course of a few decades, into air.

And it is in precisely those decades, in the midst of so much musical chaos, that we find Brahms at work. He is a son of his age, not of some other age; he is not, and cannot be, a mere repristinator. But he is that son not as an innovator, but as a creative conserver, aware of how rich is his inheritance and seeking to make good use of it. What we hear in Brahms is always something new, but never something novel. Perhaps this is the way — the only way? — to flourish in modernity.

LLMs and education

#

Note: Hand over heart, I substantially drafted this post — including its core conceit — long before reading Josh Brake’s latest Substack post. No plagiarism here!


I have two sorts of problems with “AI” in general and Large Language Models (LLMs) in particular. One is the (infinitely ramifying) ethical problem. LLMs are built on deception. They are not human (and not “alive”), do not possess human cognitive faculties, and cannot “know” anything in the ordinary human sense of that word, and yet their model is built on — after vacuuming up an enormous amount of human-created linguistic “content” — mimicking human cognition and knowledge to such an effective degree that you spend all your time relying on GPT-4o or what have you, rather than on other human beings. I take this to be a fairly straightforward form of deception, and, because of the incommensurability of truth and falsehood, I take this first problem to be the most fundamental. What does constantly being deceived, and constantly self-deceiving, do to a human being? In what ways are we damaging, and might we further damage, ourselves by using such a false tool? (See also: Mammon.) But that’s for another post.

The second is the education problem. Here my fears are well illustrated by an analogy from James C. Scott’s Seeing Like a State:

The principles of scientific forestry [TC: planting a single “crop,” in evenly-spaced rectangular grids, in place of the old ecologically diverse forests] were applied as rigorously as was practicable to most large German forests throughout much of the nineteenth century. The Norway spruce… became the bread-and-butter tree of commercial forestry. Originally [it] was seen as a restoration crop that might revive overexploited mixed forests, but the commercial profits from the first rotation were so stunning that there was little effort to return to mixed forests… Diverse old-growth forests, about three-fourths of which were broadleaf (deciduous) species, were replaced by largely coniferous forests in which Norway spruce or Scotch pine were the dominant or often only species. In the short run, this experiment in the radical simplification of the forest to a single commodity was a resounding success… the negative biological and ultimately commercial consequences of the stripped-down forest became painfully obvious only after the second rotation of conifers had been planted… An exceptionally complex process involving soil building, nutrient uptake, and symbiotic relations among fungi, insects, mammals, and flora—which were, and still are, not entirely understood—was apparently disrupted, with serious consequences. Most of these consequences can be traced to the radical simplicity of the scientific forest. … Apparently the first rotation of Norway spruce had grown exceptionally well in large part because it was living off (or mining) the long-accumulated soil capital of the diverse old-growth forest that it had replaced. Once that capital was depleted, the steep decline in growth rates began.

To apply the analogy: Maybe, just maybe, you can implement LLMs without too many problems in the first generation, among a population of adults who have already been educated. Their values have already been formed; they have already learned to read and write and think critically. (This already concedes far too much to the “AI” boosters, but for the sake of the argument, we will not pause overlong.) Perhaps they really could achieve the stunning productivity growth which we are constantly promised (though so far the results don’t seem great!). But even if that were true, can you expect those gains in the second generation, among children who are still being educated? Or would you rather expect systemic failure to ever form values, to learn critical thinking, essential reading comprehension, and basic writing skills? The adults who received pre-LLM educations have an existing store of cognitive and intellectual capital on which to draw as they encounter and learn to use LLMs. But children who never experience education without LLMs will never have the chance to develop that capital.

Furthermore, the broader environment in which this “first rotation” is encountering LLMs is not remotely the same as that in which the “second rotation” will encounter them. Indeed, the environments are being treated as if they are the same, when they should be different. My local school district is now integrating “AI” into primary and secondary education, because “universities and employers will expect AI literacy” — what tool is easier to learn to use than a natural language chatbot? Now, the workplace may appropriately demand certain kinds of efficiency from adult workers, and LLMs may just prove their usefulness in such cases (though in my view the jury is still out). Education, by contrast, should be inefficient, frictional, resistive. The mind is like a muscle: in order to grow, it must be repeatedly stretched to the limits of its capacity. The LLM chatbot is the ultimate anti-friction, super-efficient (except in, you know, water and energy) machine, which promises that you will never encounter resistance ever again; with the new “reasoning” models, you’ll never have to think for yourself again. The implications for education hardly need to be spelled out.

Scott continues:

As pioneers in scientific forestry, the Germans also became pioneers in recognizing and attempting to remedy many of its undesirable consequences. To this end, they invented the science of what they called “forest hygiene.” In place of hollow trees that had been home to woodpeckers, owls, and other tree-nesting birds, the foresters provided specially designed boxes. Ant colonies were artificially raised and implanted in the forest, their nests tended by local schoolchildren. Several species of spiders, which had disappeared from the monocropped forest, were reintroduced. What is striking about these endeavors is that they are attempts to work around an impoverished habitat still planted with a single species of conifers for production purposes. In this case, “restoration forestry” attempted with mixed results to create a virtual ecology, while denying its chief sustaining condition: diversity.

I leave the resonances between this virtualized ecology and the state of education today as a trivial exercise for the reader.

(Scott’s remarks here of course have many parallels. Ivan Illich makes a remarkably analogous argument, with respect to medicine, in the opening of Tools for Conviviality; and Michael Polanyi offers a structurally similar observation about the Enlightenment “critical movement” that sought to banish belief from knowledge: “its incandescence had fed on the combustion of the Christian heritage in the oxygen of Greek rationalism, and when this fuel was exhausted the critical framework itself burnt away.”)

twelve theses and predictions on "AGI" (falsely so-called)

#
  1. “Artificial general intelligence,” defined as “a computer able to do any cognitive task a human can do” — as envisioned, for example, in this new work of science fiction — is computationally impossible to achieve.

  2. This is because “intelligence” — in the sense of “normal human intelligence,” which is presupposed by the above definition of “AGI” — is a) impossible to fully and simultaneously articulate (hereon inarticulable) and b) non-deterministic, and therefore in at least two senses strictly non-computable.

  3. The inarticulability of intelligence has (at the very least) to do with its embodied and relational aspects. “Mind” is neither identical with nor even co-extensive with “brain activity”; rather, “mind” is (to crib from Dan Siegel’s definition) an embodied and relational process. Emotion in particular seems, as far as the causality can be determined, to be body-first, brain-second, such that it is only articulable after the fact (and in a way that changes the emotional experience). Michael Polanyi’s great work demonstrates in a philosophical register what musicians, artists, and craftspeople have always known intuitively: that the “cognitive task” of playing an instrument or using a tool depends on integrating the instrument or tool into one’s bodily experience, in an inarticulable way. And relationship through interaction with other embodied minds is such a complex process, with so many emergent layers, that not only is it poorly theorized or modeled now, it may be impossible to exhaustively theorize or model — especially because it primarily seems to take place in and through the pre- and in-articulate dimensions of cognition.

  4. Meanwhile, the non-determinism of intelligence has (at the very least) to do with quantum randomness effects in the brain, which at the mesoscale (the level at which daily human, and most complex organic, life takes place) emerge into relatively well-understood and predictable patterns, but at the nanoscale (the relevant level for a hypothetical deterministic model of cognition) are by definition impossible to predict, or even observe without altering them. I am unaware of any good reason to think the quantum effects in, say, an extremely large and inorganic GPU farm would be interchangeable with or even meaningfully similar to those in a three-pound organic human neural system.

  5. What is computationally possible, as far as I can tell, is a (relatively) high-fidelity simulation of one aspect of human cognition: the comparatively deterministic, hyper-articulated aspect of human cognition which Iain McGilchrist identifies as characteristic of the left hemisphere (hereon LH) of our brains (subject, of course, to obvious caveats from theses 2–4). Note: I am not saying, and I do not take McGilchrist to be saying, that a fully-computed model of the LH itself is possible; only that its characteristic thought-style can be simulated in high fidelity, precisely because that thought-style is comparatively deterministic and hyper-articulated.

  6. In currently existing frontier Large Language Models (LLMs), I take it something like this has already been achieved. Commercially available LLMs are now (to use a technical term) pretty good at processing and reproducing both written and spoken natural language — albeit in such a sterile “voice” that it renders the phrase “natural language” almost meaningless — and quite good at analytically processing huge quantities of formally similar information. These are two of the characteristic specializations of LH cognition, and I expect the next generation of LLMs to be significantly better on both fronts. Notably, some of the persistent failure modes of LH cognition and of LLMs are startlingly similar: “hallucination” or fabrication of nonexistent supporting evidence, a predilection for lying or circumventing rules in order to achieve a desired result, a tendency to attend to parts at the expense of wholes, and so forth.

  7. Because much of contemporary Western life (as McGilchrist and others have extensively documented) is already organized to systematically advantage that aspect of human cognition, it is therefore no surprise, nor in a sense any remarkable accomplishment, that frontier models now perform at the level of PhD students in solving advanced physics problems (albeit ones with solutions known to currently existing physics), or that some chatbots now “pass the Turing Test.” This is the natural end result of reimagining science as “knowledge production” and credentialing scientists accordingly, or of technologically reducing the typical person’s normal experiences of and capacity for conversation to so great an extent that we now take what the LLMs offer to be “human” conversation. This — and all the attendant social/economic disruption (about which more below) — is all possible without “AGI” itself being computationally feasible.

  8. The second strike against the possibility of “AGI” comes from limits in physical resources. Achievements in LLM development up to this point have been enabled by energy use, water depletion, and resource extraction on an already massive scale. The anticipated investments required for “AGI” (e.g., according to AI 2027, $2 quadrillion in new data centers over the next 10 years!!!) will require exponentially more energy, water, and mineral resources, which we either simply do not have on this planet or cannot physically extract from it at the desired rate (unless we invent, say, cold fusion). This is to say nothing of the land required to build all of the new infrastructure. I therefore anticipate that “AI” development will, as a function of resource scarcity, fail to get anywhere close to the scale of investment theoretically required for “AGI.” This may only become clear to “AI” developers, however, after they have already inflicted genuinely ruinous and probably irreversible damage to the environment and to the communities that depend on it.

  9. Considering all this, I find it probable that without ever achieving “artificial general intelligence” as imagined in science fiction, advances in “AI” over the next several years will make all but the top 1–5% of current “symbolic capitalists” functionally obsolete. This includes both high-status sectors such as consulting, finance, advertising, software development, law and legal services, etc., and lower-status (or at least lower-paying) sectors such as journalism, copywriting, teaching, administration, graphic design, the social sciences, etc. (Note that several of these lower-status professions are ones which the Internet revolution has already been destroying.) By “functionally obsolete” I mean that it will almost always be more cost-effective, and nearly as useful, to “employ” an “AI agent” for a task that previously required one to hire a human being.

  10. Sectors that are symbolic-capitalism-adjacent but require long training in embodied skill — e.g., healthcare, the experimental sciences, mechanical engineering, war — will not be functionally obsoleted, at least not so thoroughly. An inorganic robot will never be able to perform skilled tasks in the real world with the same level of ability as a trained human being (see (3) above)… and “organic robots” capable of such skill would pretty much just be, well, lab-grown humans, with many of the same inefficiencies and time-delays as regular humans. (Only a conspiracy theorist would see current Silicon Valley investments in IVF, genetic selection and editing, and artificial wombs as an attempt to create the conditions of possibility for lab-grown humans… right???) But some current features of jobs in these sectors — the features, that is, which are most akin to “AI” core competencies — will be permanently outsourced to “AI agents.”

  11. The “trades” and the “crafts,” on the other hand, will not become thoroughly automated, though they will be in various ways automation-directed and -augmented. Machine maintenance and repair, for instance: machine failure might be AI-diagnosable, but the intuitive skill necessary for actual repairs will remain the province of humans. To deal with water, you’ll always need a plumber. Reality has a surprising amount of detail, and fields like construction and mining will always require meaningful and skilled human attention to reckon with that detail. Agriculture is an interesting test case: a field that is currently extremely mechanized, but one which, as the lowest-skilled tier of human labor becomes (out of necessity) far cheaper to “buy,” may reabsorb much of that excess labor capacity. At the more humanistic end of the spectrum, traditional crafts might make a comeback of sorts (similar to the vinyl resurgence), and the performing arts will always be the province of human beings, though probably far fewer people will be performing artists in fifteen years than are right now; in both cases patronage will be the only economically viable model. For the ultra-wealthy, owning or sponsoring something evidently made only by humans will be a status symbol.

  12. In sum: I believe we are headed neither for the existential-risk, civilization-ending disaster scenarios envisioned by the “AI Doomers,” nor for the golden era of peace and prosperity and universal basic income envisioned by the “AI optimists.” (Where, exactly, do the optimists think the value creation for UBI will come from in an era of mass human unemployment?) Rather, I suspect in the near-ish term we are headed for a poorer, less culturally vibrant, less highly educated world with much greater wealth inequality. This will be a world in which many more people, including some who might otherwise have been symbolic capitalists, work in various kinds of manual labor or “trades”: agriculture, mining, energy, construction, maintenance. Others will depend, one way or another, on the patronage of the few ultra-wealthy. The whole service-economy apparatus that depends on a large leisure class will be semi-permanently diminished in proportion. It might, in other words, look in certain ways remarkably like the period of transition into the Industrial Revolution.
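
Thesis 8’s headline number is worth a quick sanity check. Here is a back-of-envelope sketch; the ~$100 trillion/year world-GDP figure is my rough assumption for scale, not a figure from the post or from AI 2027:

```python
# Scale check on the figure cited in thesis 8: "$2 quadrillion in new
# data centers over the next 10 years" (the AI 2027 projection).
# ASSUMPTION: world GDP of roughly $100 trillion/year, a 2020s ballpark.
total_investment = 2e15      # $2 quadrillion
years = 10
world_gdp_per_year = 1e14    # ~$100 trillion/year (my assumption)

annual_spend = total_investment / years
ratio = annual_spend / world_gdp_per_year
print(f"${annual_spend:,.0f}/year, or about {ratio:.0f}x annual world GDP")
```

On these assumptions the projected build-out would consume on the order of twice the entire world’s annual economic output, every year for a decade; that is the resource-scarcity point of the thesis in miniature.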

Over the long run, I believe in the resilience of humanity, chiefly because I believe in the faithfulness of God to His wayward creatures. We will not be destroyed or superseded by a higher form of intelligence, nor will we manage to completely destroy ourselves. We are remarkably adaptable and creative: life always finds a way. But we will find that the remarkably widespread prosperity of the last few decades in particular and the last two centuries in general is not, once unlocked, a permanent and automatic feature of human existence. It has depended on our irretrievably consuming the planet’s resources at an ever-accelerating rate. What cannot go on indefinitely must eventually stop. The mechanization snake will finally eat its own tail. The only question is how soon.


Addendum (08.15.2025): Well, this has had much more of an afterlife than I expected. And we have had four months more of AI development (i.e., several lifetimes). What do I think now about all of the above? So far, I feel quite as confident about items #1–7 as the day I wrote them. I am nearly as confident about #8, though I recognize that energy/water use per query is something of a moving target and that many of the major AI developers are investing in energy solutions. (Here’s a science fiction scenario for you: What if we came up with, say, a non-destructive way to siphon geothermal energy from the Yellowstone supervolcano, powering everyone’s Claude Code instances for eons while averting a real civilizational X-risk?) My greater environmental concern remains the resource extraction — and corresponding ecosystem degradation/destruction, at points of both origin and destination — necessary to build the proliferating data centers that will be necessary to build more and more capable models, and to supply the infrastructure upgrades that get water and energy to those data centers. (This is to say nothing of the environmental justice questions about where data centers are actually built.) The typical American utility depends on infrastructure that is decades old, whose construction was predicated on a long-deceased funding model (i.e., massive federal subsidies), and which is now desperately in need of repair at practically every point. It’s like the old joke about True Libertarians: How will they drive wherever they want, as fast as they want, if there aren’t any roads? Color me skeptical that AI developers will actually prioritize massive upgrades to local infrastructure in places where they build their data centers, though I would love to hear stories to the contrary.

The big questions come, of course, with #9, and perhaps to a lesser degree #10–12. I have looked back on #9 a number of times and thought: “What was I thinking?” 95–99% of current symbolic capitalists is a pretty large number! So let me offer a couple of qualifications and clarifications. First, by saying “probable,” I think I meant “more than 50% probability.” (In my lexicon, I think I use “unlikely” for <10%, “possible” for 25–50%, “likely” for, oh, I don’t know, 60–75%, and “almost certain” for >90%.) I would have been comfortable saying then, and maybe still am, that it is “almost certain” 50% of current symbolic capitalists will be functionally obsoleted (more on that term in a second). Note that this is not the same thing as Dario Amodei’s “half of all entry-level white-collar jobs.” There are a lot of even entry-level white-collar workers — especially in healthcare — who are not pure symbolic capitalists, in Musa al-Gharbi’s sense, even if they presently do a great number of symbolic-capitalist things in the course of their everyday work; my #10 was meant in part to correct against this misconception. And, even in the most successful AI development scenario I can reasonably imagine, functional obsolescence of the bottom 95% of symbolic capitalists does not mean that all those people go “oh well, guess I’d better become a farmhand.” It does mean they move down the prestige and remuneration ladder in various ways, but probably does not mean they have to leap off and start again from the bottom. It also means that fewer young people even aspire to enter those fields in the first place, instead proactively entering less automatable fields (anecdotally, at all colleges outside the absolute most elite tier, student interest in nursing programs is just skyrocketing). And it means that the gates behind which the true symbolic-capitalist jobs are kept become ever more difficult to unlock. A PhD has long offered no guarantee of success, or even of a job, in any field. 
Now some of the most ostensibly prestigious PhDs are becoming, literally, impossible to get.
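
The hedging lexicon above can be encoded as a small lookup table. This is my encoding, not the author’s; the ranges come from the parenthetical, and the gaps between them (10–25%, 75–90%) are left unnamed, as in the post:

```python
# The post's (approximate, self-reported) probability lexicon,
# mapping hedge words to rough probability ranges.
LEXICON = {
    "unlikely":       (0.00, 0.10),
    "possible":       (0.25, 0.50),
    "probable":       (0.50, 1.00),  # "more than 50% probability"
    "likely":         (0.60, 0.75),
    "almost certain": (0.90, 1.00),
}

def hedges_for(p: float) -> list[str]:
    """Return every hedge word whose range contains probability p."""
    return [word for word, (low, high) in LEXICON.items() if low <= p <= high]

print(hedges_for(0.95))  # "probable" and "almost certain" both apply here
```

Note that the ranges deliberately overlap: anything “almost certain” is also “probable,” which is consistent with how the terms are used in the addendum.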

“Functional obsolescence” is also an important term for my argument in thesis #9, one which I did not define in quite the way I should. My definition was that “it will almost always be more cost-effective, and nearly as useful, to ‘employ’ an ‘AI agent’ for a task that previously required one to hire a human being.” The term “cost-effective” should be clear. “Useful” is obviously less so. What I basically meant is that, to a manager, the short-term quality of work that an “AI agent” can do for the entry-level tasks that require little contextual knowledge will seem so impressive that they will, by default, choose to defer or forgo the long-term gains of adding another human person to the team. In any organization, a new hire, no matter how intelligent or experienced, needs some time to learn the ropes, and depending on the context it may be not just weeks or months but years before the institution sees its return on investment, as it were. (There is a different calculus for primarily internal-facing and primarily external-facing roles here; I suspect the functional obsolescence of the sales account manager, whose personal touch with the client is in fact part of the package sold to the client, will be far slower to arrive than the functional obsolescence of the sales account manager’s executive assistant.) This does not prevent a company from placing explicit value on hiring and developing human employees, such that they are willing to forgo the short-term advantages of AI agents. I am immensely grateful to work for one such company. I suspect there are many other symbolic-capitalist companies that are still talking this people-centric talk, but are quietly making preparations to cease walking the walk as soon as it becomes practical to take a different path.

The other main reason that #9 seems far less plausible to a lot of people now than it perhaps did four months ago is the stubborn persistence of certain kinds of AI failures and hallucinations. The frontier models still make up references to nonexistent sources (though less often than they used to); they still offer impossible and logically laughable ex post facto rationalizations for “conclusions” at which they claim to have “arrived” via “chains of thought,” thus obscuring their inner workings (though, admittedly, real human beings are awfully good at this as well); they still can’t reliably tell you how many B’s there are in the word “blueberry” or how many U.S. state names contain the letter R. The recent much-hyped launch, and notable failures, of OpenAI’s GPT-5 has Maria Sukhareva announcing that “AI Winter is coming.” And indeed, if this indicates that we are reaching the upper bound of the logistic curve and the rate of progress is leveling off, then “AI agents” will not be anywhere near as good “in the next several years” as is necessary to functionally obsolete 95–99% of current symbolic capitalists. They will still be an amazing technological feat by any metric, but like many such feats of the industrial era, they will not actually decrease the absolute amount of work there is to be done, but simply reshape much of the existing work around themselves. They will fail to be transformative in the way that, say, the telegraph was to the nineteenth-century general or journalist (completely reshaping the possibility structure in which their work is done), instead becoming something more like the washing machine to the mid-century housewife (removing a source of drudgery, freeing up time for… other kinds of drudgery). I was deliberately vague in saying this could happen in “the next several years” — recall: the transformer architecture on which our contemporary LLMs are built was invented in 2017, which in my lexicon is only “several years ago”! — so let’s wait until, say, 2032 to render final judgment. Unfortunately, my intuitions here are still pretty pessimistic. I do think GPT-5’s failures indicate some systemic misconceptions (see, again, items #1–7), but humans can get awfully, even terrifyingly, far operating in a purely LH mode of cognition. We may have genuinely passed the inflection point on the logistic curve, after which we will see fewer and fewer dramatic breakthroughs and more and more high-profile failures… and still have a long, long way to go.

The larger point to be made, and the place where, in my view, real unpredictability enters the picture, is that none of these developments happen in laboratory-style isolation from their reception in culture. (If there is any such thing as “laboratory-style isolation,” on which again see Personal Knowledge.) That reception, as far as I can tell, is not going well. The proliferation of “AI slop” is incredibly unpopular, even if at this point few people can reliably distinguish it from “real,” human-grade slop. I cannot imagine parents (at least in upper-middle-class contexts) long enduring the indiscriminate application of chatbot-based ed-tech products to their kids’ classrooms; Jonathan Haidt’s The Anxious Generation seems to be striking too strong a nerve for that. If (speaking extremely loosely here) the washing machine created “the problem that has no name” for mid-century housewives, thus accelerating the development of second-wave feminism, what social upheaval(s) might the washing machines of the AI field accelerate or unleash? I have no idea. And those will shape the outcomes I predict in #11 and, especially, #12. So with that (very large) asterisk placed next to them… I more or less stand by them. Rogue AI is not going to decide it needs to wipe us all out with commandeered nuclear weapons to protect its colonization of the universe (the doomsday scenario in “AI 2027”), nor will properly aligned AI usher in a drudgery-free UBI paradise (the golden-age scenario in “AI 2027”). Both of those scenarios depend on an inflationary and wrong view of AI’s capability to genuinely imitate human intelligence. Instead, the real damage will be done by those purportedly racing to prevent the first and usher in the second, who are on public record as having zero clue what their actual end goal or desired end point is. They seem to imagine it will go on forever and ever, growth without end. 
But Stein’s Law is undefeated, because it is in fact a restatement of the second law of thermodynamics: “What cannot go on forever must eventually stop.”

the Plato project

#

Reflections on Plato’s dialogues — or, if I break it out as a separate post, links to reflections — to follow below. The order is that of the Ukemi Audiobooks series The Socratic Dialogues, which dramatizes Benjamin Jowett’s translation with a full cast of great British actors (headlined by David Rintoul as Socrates). Jowett’s translation may be “out of date” from a scholarly perspective (which I am unqualified to judge) but in Rintoul’s hands (vocal cords?) is enduringly lucid. Ukemi also organizes the dialogues loosely according to a traditional early-middle-late periodization, which I gather is a contested approach, but it doesn’t seem to harm the understanding for a first pass. (I’m already suspecting that the “dramatic ordering,” following the chronology of Socrates’ life as best that may be reconstructed, might be more fruitful… but that’s for a second round, and I’m just beginning the first!)

Early Period

  1. Apology. A barnstormer to start in medias res — better, near the end of things. We meet Socrates for the first time as he defends himself, before a jury of his fellow Athenians, against the charges laid at his door: of being an evildoer and “making the better appear the worse,” of being an atheist and introducing new deities, and of corrupting the youth. He does not succeed, though he is condemned by only a small margin. Socrates here introduces a number of key motifs in the corpus: his claim to “know nothing at all” and thus to only be the “wisest” by exposing everyone else’s ignorance (which makes him quite unpopular); the deceptiveness of rhetoricians, who know how to speak elegantly and persuasively, but know really nothing of the Good and therefore of how to make men better; his own role as a sort of “gadfly,” provoking the polis into active self-reflection which it might otherwise neglect, and seeking thus to improve it; the absolute priority of caring for the soul over against all other cares (of property, wealth, body, etc.), and the absolute refusal to employ any tactics unworthy of the soul; the “daemon” or voice of God — or Conscience — speaking to him and infallibly guiding him toward the right course of action, though all public opinion be against him; his real indifference — perhaps, even here, optimism! — in the face of death, but absolute service to the truth. We also get a taste of the dialectic style as he cross-examines his accuser Meletus. It is an extraordinary bit of writing by Plato, moving and sweeping and incisive. Apology thus introduces and crystallizes the brilliant literary paradox of the Socratic corpus: Socrates disclaims all “rhetoric” and “elaborate defence,” portraying himself as a humble and artless seeker of wisdom — all while using brilliant rhetoric and elaborate defensive strategies to demolish his opponents’ arguments.
I loved Apology, and expect to revisit it with great enjoyment, but there is undoubtedly something inhuman and irritating (gadfly-like!) about Socrates. One understands instantly why Socrates had so many admirers in his own day (including Plato), and why Plato’s Socrates has been such a titanic figure in the history of thought and culture; and, equally, just why Socrates made so many enemies. Most of all I chafe at his claim that “the life which is unexamined is not worth living.” Is it not the other way round: no life which is lived is worth leaving unexamined?
  2. Crito. A simple but moving dialogue, set in prison on the night before Socrates’ execution, on the question: “Is it right to disobey an unjust law?” Socrates’ answer in this case, of course, is No. The titular Crito (also mentioned in Apology) comes to him in prison and makes one last effort to persuade Socrates to escape his condemnation. But — despite his complaint in Apology that his trial was not conducted with full propriety — Socrates is determined to accept the death penalty meted out by the state. The most curious, and seemingly central, feature of the dialogue is the lengthy portion spoken by Socrates in the voice of the personified Laws of Athens. How, the Laws ask Socrates (and thus Socrates asks Crito), can one who is so personally committed to justice defy the demands and decisions of justice?
  3. Charmides. Now we flash back several decades, and get going with our first, though assuredly not last, “What is X?” dialogue. The X in question is the virtue of temperance.
  4. Laches.
  5. Lysis.
  6. Euthyphro. “What is piety?”
  7. Menexenus. A parody of the funeral oration genre, in which the ostensible praise of Athens and of great Athenian heroes turns out to just yield a series of digressions, backhanded criticisms, and trite aphorisms.
  8. Ion.
  9. Gorgias. Is it okay to really rather dislike this dialogue? It is long, repetitive, and occasionally mean-spirited. The subject matter is of great importance, of course: moving from the more specific question “what, if anything, does a teacher of rhetoric need to know about goodness?” to the general question “what is the best way of life?”. Yet in these early dialogues Plato does not often set up Socrates’ interlocutors as particularly compelling or thoughtful — see Euthyphro or Ion and their namesakes — but in Gorgias he seems to regard, and Socrates seems to treat, all three of Gorgias, Polus, and Callicles with barely-disguised contempt. And they are, in differing ways, worthy of contempt (less so, perhaps, Gorgias).
  10. Protagoras.
  11. Meno.
  12. Euthydemus. A merciless satire on sophistry. At first Socrates is baffled, then infuriated, then bemused, then amused by the “method of contradiction” employed by the brother sophists Euthydemus and Dionysodorus; finally he pulls himself together and shows that he too can master it, if he chooses. There is a substantive philosophical point lurking within the mockery, though. Euthydemus and Dionysodorus are boxers who have but lately taken up sophistry (in order to make money and increase their reputation). They have grasped that the key to successful sophistical argumentation is equivocation: exploiting multiple meanings of their opponents’ words in order to catch them in apparent contradictions. Of course, as soon as one scrutinizes their arguments, these fall to pieces, so the sophist must keep his opponent permanently off balance and give him no room to strike back. Sophistry is dialectic reduced to boxing: a contest of strength and speed in which one hit is as good as another. The true philosopher, however, cares not at all for victory, but only for the pursuit of truth. And truth requires valid arguments, clear and consistent definitions, and patient exploration.
  13. Lesser Hippias.
  14. Greater Hippias.

Middle Period

  1. Symposium.
  2. Theætetus. Fantastic. Far and away the most enjoyable, dare I say riveting, of the dialogues so far. “What is knowledge?” Must revisit and write a longer reflection.
  3. Phædo.
  4. Phædrus.
  5. Cratylus. Some people, apparently, say this dialogue is “tedious.” I had the exact opposite reaction! (Perhaps I am a tedious person…) Admittedly, for the first two thirds, I repeatedly thought, “Surely you can’t be serious!”, as Socrates offered increasingly speculative and unsupportable folk etymologies for all sorts of words (though the more abstract a concept denoted by a word, the less speculative it seemed to me) to supposedly show that the relation between a word and the thing it represents is not arbitrary or merely conventional, but is based on nature… only to experience philosophical whiplash in the final third as Socrates dismisses that linguistic theory and argues that words are given by convention and have no necessary naturalistic aspect!
  6. Parmenides. This one is fascinating, and demands revisiting. A precocious, but philosophically underdeveloped, nineteen-year-old Socrates meets Zeno (he of the Paradoxes) and the famous Parmenides. Socrates knows the teaching of the great Heraclitus that all things are in constant flux and motion (“You cannot step into the same river twice”): the One is an illusion, the Many is all. Parmenides and Zeno, on the contrary, propose to show that eternal reality is unchanging and flux is impossible: the Many is an illusion, the One is all. Socrates, mock-naïvely, proposes a synthesis: all earthly things are indeed in perpetual flux, but they derive their thing-ness from participating in eternal unchanging Forms or Ideas. Parmenides, somewhat unexpectedly, dismantles this proposal with six increasingly devastating counter-arguments, exposing all sorts of internal contradictions, absurdities, infinite regresses, and the like. But then… Parmenides flips the script and sets out to show, in tremendous (and occasionally mind-numbing) specificity, how one might after all defend a theory of Ideas as logically coherent. Does he succeed? Can the One and the Many be held together? What is the real point of the deductions? It’s hard to say. I must reread it, and write a longer reflection.
  7. Republic. Fascinating, riveting, eye-opening: “oh, that’s where that comes from!” a million times. Must revisit. Must write a longer reflection.

Late Period

  1. Timæus. Whatever the opposite of riveting is; I really, really struggled for motivation to keep listening to this one. I know it’s one of the most influential texts in the history of Europe, but even with the capable David Timson reading the part of the eponymous monologist, I found my attention slipping over and over again.
  2. Critias. It’s Númenor! Or, really, Númenor is Atlantis: “But even the name of that land perished, and Men spoke thereafter not of Elenna, nor of Andor the Gift that was taken away, nor of Númenórë on the confines of the world; but the exiles on the shores of the sea, if they turned toward the West in the desires of their hearts, spoke of Mar-nu-Falmar that was whelmed in the waves, Akallabêth the Downfallen, Atalantë in the Eldarin tongue.” More seriously, we do get hints — reminiscent of Republic (which takes place, dramatically, just the previous day) — at the Platonic ideal for a political constitution.
  3. Sophist. The follow-up to Theætetus is not quite as much fun, though it introduces a fun new hermeneutical device: most of the philosophical exposition is not in the mouth of Socrates, who is a mere spectator, but spoken by a nameless Stranger from Elea (home of Parmenides and Zeno). The bulk of the dialogue consists in the search for a single definition via numerous “divisions” and “classes” — much more similar in some ways to Parmenides (to which it makes reference) than to its ostensible precursor. And of course the sophist as a figure is an unflattering subject. It’s quite interesting, however, after hearing Plato decidedly privilege the One over the Many in Republic, to hear some… back-pedaling, maybe? Perhaps the One and the Many can be held together after all. Dramatically speaking, Parmenides is set at the very outset of Socrates’ philosophical career, whereas Theætetus, Sophist, and Statesman are said to take place at nearly the end of his life.
  4. Statesman. A direct continuation from Sophist, though Socrates takes over from Theætetus as the Eleatic Stranger’s primary interlocutor.
  5. Philebus. At one point near the three-quarters mark of this dialogue, Protarchus, who is Socrates’ principal interlocutor, remarks to the philosopher, “Your many repetitions make me slow to understand.” Socrates responds, infuriatingly, “As the argument proceeds, my boy, I dare say that the meaning will become clearer.” Protarchus’ dry response, “Very likely,” sums up my experience of this dialogue. Here is an undoubtedly sophisticated, mature, exacting reflection on a classic Socratic-Platonic theme — the superiority of a life spent seeking wisdom to a life spent seeking out pleasure — whose intelligibility is compromised by its repetitiveness. The argument is just difficult to follow. Socrates multiplies distinctions, which no doubt are useful, in service of the general thesis that the enjoyment of pleasure (and its coordinate, the avoidance of pain — though how, precisely, they are coordinated is one of the many subjects of discussion) is not the highest good in life, but rather a faculty like any other, which admits of distortions and falsities, and which therefore cannot be the highest good of a human life. Here there are none of the dramatic fireworks of the earlier Gorgias which touches on similar themes (and which is referenced occasionally). It was, however, worth listening to this dialogue just for the hilarious aside near the beginning in which Socrates describes those young men who are first intellectually thrilled by the paradoxes of One and Many (15e–16d); not much about Philosophy Bros has changed, it seems, in at least 2400 years.
  6. Laws.

One recurrent theme throughout Plato’s work, increasingly prominent in the later dialogues (though I recall it as early as Euthydemus), is the challenge posed for his theory of knowledge by falsehood or false knowledge. The problem runs as follows. Everyone agrees that there are things called falsehoods which we can utter. Yet, logically speaking, this should not be possible. After all, we speak using words; the meaningfulness of words depends on their signifying things that really have existence; there are no words to speak of non-existence; therefore, we can never speak of that which does not exist; so also we can never speak falsely but can only speak the truth. Similarly, we can never know anything false, but always and only know things that are true; our difficulties come not from false knowledge, which is strictly speaking a contradiction in terms, but from ignorance alone. The argument sounds persuasive when considered abstractly, yet it yields an obviously ludicrous conclusion! It receives its most extended treatment, if I recall correctly, in Sophist, where the Eleatic Stranger explores the problems raised by the term “non-being”. What does the term “non-being” actually indicate?

There is something here formally similar to — and no doubt influential upon — the evidently unsolvable (in the technical sense, absurd) problem of evil in the Christian tradition. God, Who created all things, is (on the classical-theistic view) perfectly good, perfectly knowledgeable, and perfectly capable. He must therefore have created all things perfectly. Furthermore, as He is (by definition) the unique Creator, no creature can contravene His created design or overrule His will even if it wanted to. So where does evil come from? For it is evident to all that something has gone horribly wrong. Does it come from some kind of deliberate possibility for evil which He gave to His creatures as part of their creation? If so, how is He not the creator of evil also, and how then can He be perfectly good? For that matter, how would a perfectly good Creator be able to conceptualize the possibility of evil so as to deliberately create it? The limitless perfections of classical theism seem to be in tension. But the alternatives are even less appealing. If evil is somehow inherent in the nature of creatureliness, such that anything with any limitations at all has not only a potentiality for but an actuality in evil, then either “evil” is a fundamentally relativized category with no real purchase, or it might be better never to have been created at all. Or if the Creator is limited in any of His moral goodness, knowledge, or capacity, one must suppose that evil might be able to permanently and ultimately gain the upper hand over Him and His creatures. One could fall back on saying that evil cannot exist, because it is a logical impossibility with no satisfactory explanation — yet we have a strong and near-universal intuition that it does exist.

pivoting to Plato

#

With the beginning of this year, I have determined to patch some of the (very large) holes in my reading of the classics. I have never read Plato or Aristotle in any sort of panoptic way, let alone later major philosophers of antiquity such as Seneca or Plotinus; my reading of the Church Fathers has been almost entirely occasional and extremely selective; it has been years since I have read either the Iliad or the Odyssey (and I have in fact never read the Aeneid). My major reading for roughly the last two years has instead focused on the characteristic novelties and problems of modernity, as articulated by modern writers: George Steiner’s Real Presences, James C. Scott’s Seeing Like a State, Lorraine Daston’s Rules, Michael Polanyi’s Personal Knowledge, Jason Josephson-Storm’s The Myth of Disenchantment, Erazim Kohák’s The Embers and the Stars, and Alasdair MacIntyre’s After Virtue and Three Rival Versions of Moral Enquiry; in a more explicitly scriptural/theological key, my teacher Jeremy Begbie’s Abundantly More, my teacher Kavin Rowe’s essays on New Testament hermeneutics, Brevard Childs’ Biblical Theology of the Old and New Testaments, Albert Schweitzer’s The Quest of the Historical Jesus, Ephraim Radner’s Time and the Word, and Andrew Louth’s Discerning the Mystery; and, of course, the granddaddy of them all (by at least volume if not temporality), Iain McGilchrist’s The Master and His Emissary and The Matter With Things.

If your guiding intellectual question is “how shall we live with integrity as Christians in modernity?”, as I am beginning to suspect mine is, this body of literature possesses obvious importance. I am nowhere close to having plumbed the full depths of this tradition (or complex of traditions), and do not intend to stop reading in this area. My reading project on the nature of tradition will bring me back up to the present age with (at least) Gadamer, Lindbeck, and more MacIntyre, and I have several more major works of twentieth- and twenty-first-century philosophy and theology already waiting for me on my shelves (Heidegger, Cassirer, Adorno & Horkheimer, Bultmann, Frei, Jenson, Rosa, and so forth). And I’m currently reading through David H. Kelsey’s Eccentric Existence, which (whatever else, good or ill, I might say about it) represents a one-man (two-volume) masterclass in theological engagement with modernity. So in no way am I withdrawing my attention from modernity. Rather, two things have crystallized my sense that it is time to turn (at least more of) my attention to the Old Things.

The first is that I have found myself increasingly overpowered by what I call in shorthand the “I do not understand Hegel” problem. The great theologians and philosophers of the not-too-distant past — and, still, the greatest in the present — were staggeringly, now almost incomprehensibly, literate and erudite figures. Before publishing his great work on hermeneutics, Gadamer was a noted expert on the pre-Socratics. Karl Barth is sometimes accused of not having read the tradition fairly, but he has never been accused of not having read it thoroughly. Brevard Childs seems to have truly read every book ever written. Part of what makes Hegel singularly difficult is, of course, his ruthlessly abstract and intensely tedious style; but no doubt another part is that very few people today are educated the way that he and his peers were. Take a slightly more recent example: what man of letters teaching at the University of Michigan today would dare assign his undergraduate students a reading list like W. H. Auden’s? If philosophy and theology are the Great Conversation, one must learn to discern and hear the enduring presence of the older voices who have left the room before one can truly contribute or at least understand.

The second is that, despite the immensity of my to-read list and the paucity of my already-read list, I do feel that I reached an inflection point with the turning of the year. That was when I finished reading Karl Barth’s Protestant Theology in the Nineteenth Century — the bulk of which is actually about eighteenth-century philosophy and theology as the “background” to nineteenth-century theology; and it must be said that Barth appears to enjoy writing about Rousseau, Kant, Hegel, and so forth a great deal more than the nineteenth-century theologians who are the book’s ostensible subject — and an unofficial trilogy by Lesslie Newbigin: Proper Confidence, Foolishness to the Greeks, and The Gospel in a Pluralist Society. These, somehow, coordinate in my mind: Barth and Newbigin (who was, not coincidentally, heavily influenced by Barth) together outline the negative space for and sketch the positive content of the properly Christian post-liberal synthesis which we desperately need — or which, at any rate, I need in order to feel intellectually satisfied. In the coming months, as the intellectual dust from my aforementioned reading settles, I may take a few stabs at describing what seem the chief features of that synthesis. But I also sense, if dimly, that in order to know what I really mean by those features, I will need some more pre-modern context and contrast. I can thus leave Barth and Newbigin for a little while, confident that I will return to them better able to understand what is fruitful in what they offer.

It is high time, then, that I actually read Plato and Aristotle (not to mention Seneca and Plotinus); that I (begin to) read through the Church Fathers; that I revisit Homer (and meet Vergil anew). I am doing so as follows. For Plato, I have launched into the Ukemi Audio series dramatizing the Socratic dialogues (in Benjamin Jowett’s translation), with the astounding David Rintoul as an unforgettable Socrates — and intend to write here, for my own benefit, at least a short reflection on each dialogue. For the Fathers, the obvious place to start is Volume I of the old Schaff set, with Sts. Ignatius, Justin, Irenaeus, and their comrades. With the Iliad, which I have at least read before (perhaps more than a decade ago), I have cracked open Emily Wilson’s recent translation. In none of these cases is the point a deep, doctoral-seminar level understanding. Rather, the point is familiarity, breadth, and fresh inspiration: to drink deep from the old and honored wells.

building well

#

A good and tough word from A. G. Sertillanges:

Those who aim at what is beyond their powers, and thus run the risk of falling into error, who waste their real capacity in order to acquire some capacity that is illusory, are also men of curiosity in the olden sense… Do not overload the foundation, do not carry the building higher than the base permits, or build at all before the base is secure: otherwise the whole structure is likely to collapse. What are you? What point have you reached? What intellectual substructure have you to offer? These are the things that must wisely determine your undertaking. “If you want to see things grow big, plant small,” say the foresters; and that is, in other words, St. Thomas’s advice. The wise man begins at the beginning, and does not take a second step until he has made sure of the first. That is why self-taught men have so many weak points. They cannot, all by themselves, begin at the beginning.

— A. G. Sertillanges, O.P., The Intellectual Life: Its Spirit, Conditions, Methods (tr. Mary Ryan), 27.

Goal for the next stage of my intellectual life: Answer his questions. Begin again from the beginning.

Barth on Hegel

#

In turning away from Hegel the [nineteenth century] acknowledged that, having reached the summit of its desires and achievements, it was dissatisfied with itself, that this was after all not what it had intended. It set Hegel aside and tried again, but did not even reach such a peak a second time, and thus manifestly it was bound to be even less satisfied than it was before, although it pretended to be. Where does the fault lie? In Hegel? Those who study him will not receive this impression. If it is a question of doing what the entire nineteenth century evidently wanted to do, then Hegel apparently did it as well as it could possibly be done. Or is the reason that afterwards the age of the great men was past, that there was no genius present in the second half of the century to carry out the better things which the century it seems had in mind in turning away from Hegel? But it is always a bad sign when people can find nothing to say but that unfortunately the right people were lacking. This should be said either always or never. Every age, perhaps, has the great men it deserves, and does not have those it does not deserve. The question only remains, whether it was a hidden flaw in the will of the age itself, perfect as the expression was that it had found in Hegel, which was the reason why it could not find any satisfaction in Hegel and therefore not in itself, and yet could not find any way of improving upon and surpassing Hegel, and therefore itself. It might of course be possible that Hegelianism indeed represented in classic form the concern of the nineteenth century, but precisely as such came to reveal the limited nature of this concern, and the fact that it was impossible to proceed from it to the settlement of every other question of truth. And that for that reason it was, curiously, condemned.

— Karl Barth, Protestant Theology in the Nineteenth Century, 374. The whole lecture is an absolute tour de force: elucidating what, for both philosophers and theologians, makes Hegel such an immensely attractive option — and why Hegel, taken on his own terms (like nineteenth-century thought as a whole), ultimately represents a cul-de-sac for those disciplines.