I have written suspiciously little about music for a blog entitled “Program Notes”. Well, last week — May 7, to be exact — was Johannes Brahms’s birthday (happy 192nd, Johannes!), so it seems a fitting moment to write down some thoughts I have had floating around for a while.
Brahms occupies a curious place in Western musical history. Among the pantheon of “great composers,” he is perhaps the first who was not self-consciously an innovator. (Perhaps Mozart is a counter-example? But if in his youthful period one hears nothing but an almost uncanny perfecting of the Classical style, the mature works of his final decade disclose a latent genius for musical innovation that at times nearly shatters the mold; as with Schubert, one can only imagine how different music would have been had he been given more time.) During his own lifetime Brahms was known as a notable musical conservative, a protégé of the Schumanns and an inheritor of the Beethoven legacy, in contrast to the self-consciously progressive and experimental followers of Liszt (especially Wagner) — the (hyperbolically) so-called “War of the Romantics.” There is no genre with which he is singularly identified, which he can be said to have (re)invented, transformed, or redefined — unlike Mozart with the opera, Schubert with the song cycle, Liszt with the tone poem, Mahler with the symphony, or Beethoven with pretty much everything (but especially the symphony and the string quartet). Nor is there an instrument whose technique is distinctively and permanently marked by his influence — unlike Bach for the violin and the keyboard, Beethoven and Chopin and Liszt for the modern piano, or Mendelssohn and Wagner (quite differently from one another) for orchestration. His style pays constant homage to Bach’s finely tuned counterpoint, Haydn’s sense of proportion, Beethoven’s ear for dramatic flair, and Schumann’s expressive melody; but it is hard to say, whatever it might mean to say it, that in any of these areas he “improves” upon his forebears.
Part of all this, no doubt, is that Brahms was a notorious perfectionist — spending nearly twenty years writing and rewriting his First Symphony, and burning the manuscripts of more than a dozen string quartets he considered inadequate. But one can equally say of “perfectionism” that it is an unwillingness to measure oneself by any standards that transcend or relativize those one is given. The First Symphony, after all, took twenty years because it had to be worthy of publication after Beethoven’s Ninth (a burden which Schumann and Mendelssohn had notably not felt). Beethoven’s sheer artistic self-belief (and self-regard) was what permitted him to dispense with the artistic conventions he inherited from Haydn and Mozart, and every great composer after Beethoven considered that to be truly great one must at least try to be like Beethoven in this respect. Every great composer, that is, except for Brahms. He alone seemed to think it worthy to simply and creatively conserve the traditions he inherited, offering to posterity a handful of finely polished gems in which, like the Silmarils of Fëanor, the light of now-past ages is caught and distinctively refracted. And a small handful indeed: in the genres which his great forebears had seen, or which had come to be seen, as offering special artistic statements — the string quartet, the piano trio, the piano sonata, and above all the symphony — he left just a few pieces each: string quartets three, piano trios three, piano sonatas three, and symphonies four. If there is a genre in which he was, perhaps, the greatest “innovator” of his day, it is that genre which most self-consciously honors the past: the theme and variations.
All this may sound curiously negative, as though I am suggesting (as Richard Strauss said about himself) that Brahms is “not a first-rate composer, but a first-class second-rate composer.” Not so. Brahms, in his totality, is certainly greater than Strauss (who, as the Brits say, routinely over-eggs the pudding a bit — though that masterpiece of his twilight years, the Vier letzte Lieder, deserves to stand in the first rank). The best passages in Brahms are as transcendently great as anything in Beethoven or Mozart. I am thinking, specifically, of the last five minutes (102 bars) in the first movement of the First Piano Concerto, though there is any number of passages I could spotlight. This movement, and this passage in particular, exemplifies all the best qualities of his writing: the organic expansion of just one or two simple musical cells into a vast whole; a remarkable economy of both counterpoint (there are rarely more than two separate lines moving simultaneously) and orchestration (somehow creating a full, sustained sound without Wagnerian orchestral busy-work); judicious exploitation of the flexible, propulsive rhythms available in his long triple meter, keeping the energy flowing through long yet elegantly balanced melodic lines; and the perfectly seamless, almost invisibly prepared transition from the calm light of the second theme to the darkness of the coda (at bar 438, 22:03 in the above linked recording), like a great cloud slowly obscuring the face of the Sun. There is nothing pretentious, nothing self-serious, nothing indulgent in Brahms. Everything is heartfelt, often even passionate, but utterly sincere. Where Mozart’s music sounds effortless, almost too perfect to be real, and Beethoven’s music sounds immensely effortful, every note as if written with blood — well, Brahms’s music sounds, simply, human: the music of human life, life as really lived, not as larger than life.
Now indulge me as I offer some speculation. In his masterpiece of criticism Real Presences (1989), George Steiner draws attention to the “broken contract” between logos and kosmos, immanent language and transcendent reality: the gulf (so he argues) at the heart of modern humanity’s sense of alienation. If there is no God, there is no “real presence” in anything we say: our words are meaningless. (Steiner himself was, notably, unable to believe in God: throughout Real Presences he writes of God and the transcendent as one who longs for but cannot himself have them.) And it was in the 1870s, Steiner suggests, that European critics and scholars began to advocate for detaching logos from kosmos. I am not even one-thousandth the expert on European arts and letters that Steiner was, but I cannot help noting that in the realm of music, this is precisely the period when tonality and tradition — the so-called “Common Practice” — begin to break down. Wagner’s “Tristan chord” (premiered 1865) is often seen as the touchstone for this development: the first public statement by a leading composer in a major work that the boundaries of tonality and the “rules” of voice-leading could be breached for the sake of expression. Of course, one is not terribly hard pressed to find Tristan-chord-like harmonies and resolutions in earlier composers (Schumann!), but it is hard to deny that there is something… flagrant? iconoclastic? Promethean? in the use Wagner there makes of it. And in any case, the floodgates opened in the 1870s and onward — with Verdi, Franck, Saint-Saëns, and Mussorgsky (all born before 1850) following Wagner in preparing the ground for really major innovations by Puccini, Mahler, Debussy, and Strauss (born after 1850) within a just-barely-tonal paradigm.
By the beginning of the 20th century, Ravel and Scriabin were conjuring essentially non-tonal landscapes, and Schoenberg was developing the twelve-tone paradigm by which he sought to banish the concept of a single tonal center from his music — a deliberate repudiation of kosmos in favor of (a highly mathematical and schematized notion of) logos. All that was (apparently) solid melted, in the course of a few decades, into air.
And it is in precisely those decades, in the midst of so much musical chaos, that we find Brahms at work. He is a son of his age, not of some other age; he is not, and cannot be, a mere repristinator. But he is that son not as an innovator, but as a creative conserver, aware of how rich is his inheritance and seeking to make good use of it. What we hear in Brahms is always something new, but never something novel. Perhaps this is the way — the only way? — to flourish in modernity.
Note: Hand over heart, I substantially drafted this post — including its core conceit — long before reading Josh Brake’s latest Substack post. No plagiarism here!
I have two sorts of problems with “AI” in general and Large Language Models (LLMs) in particular. One is the (infinitely ramifying) ethical problem. LLMs are built on deception. They are not human (and not “alive”), do not possess human cognitive faculties, and cannot “know” anything in the ordinary human sense of that word, and yet their model is built on — after vacuuming up an enormous amount of human-created linguistic “content” — mimicking human cognition and knowledge to such an effective degree that you spend all your time relying on GPT-4o or what have you, rather than other human beings. I take this to be a fairly straightforward form of deception, and, because of the incommensurability of truth and falsehood, I take this first problem to be the most fundamental. What does constantly being deceived, and constantly self-deceiving, do to a human being? In what ways are we damaging ourselves, and might we damage ourselves further, by using such a false tool? (See also: Mammon.) But that’s for another post.
The other problem is practical, and an analogy will help to frame it. In Seeing Like a State, James C. Scott recounts the rise and fall of German “scientific forestry”:
The principles of scientific forestry [TC: planting a single “crop,” in evenly-spaced rectangular grids, in place of the old ecologically diverse forests] were applied as rigorously as was practicable to most large German forests throughout much of the nineteenth century. The Norway spruce… became the bread-and-butter tree of commercial forestry. Originally [it] was seen as a restoration crop that might revive overexploited mixed forests, but the commercial profits from the first rotation were so stunning that there was little effort to return to mixed forests… Diverse old-growth forests, about three-fourths of which were broadleaf (deciduous) species, were replaced by largely coniferous forests in which Norway spruce or Scotch pine were the dominant or often only species. In the short run, this experiment in the radical simplification of the forest to a single commodity was a resounding success… the negative biological and ultimately commercial consequences of the stripped-down forest became painfully obvious only after the second rotation of conifers had been planted… An exceptionally complex process involving soil building, nutrient uptake, and symbiotic relations among fungi, insects, mammals, and flora—which were, and still are, not entirely understood—was apparently disrupted, with serious consequences. Most of these consequences can be traced to the radical simplicity of the scientific forest. … Apparently the first rotation of Norway spruce had grown exceptionally well in large part because it was living off (or mining) the long-accumulated soil capital of the diverse old-growth forest that it had replaced. Once that capital was depleted, the steep decline in growth rates began.
To apply the analogy: Maybe, just maybe, you can implement LLMs without too many problems in the first generation, among a population of adults who have already been educated. Their values have already been formed; they have already learned to read and write and think critically. (This already concedes far too much to the “AI” boosters, but for the sake of the argument, we will not pause overlong.) Perhaps they really could achieve the stunning productivity growth which we are constantly promised (though so far the results don’t seem great!). But even if that were true, can you expect those gains in the second generation, among children who are still being educated? Or would you rather expect systemic failure to ever form values, to learn critical thinking, essential reading comprehension, and basic writing skills? The adults who received pre-LLM educations have an existing store of cognitive and intellectual capital on which to draw as they encounter and learn to use LLMs. But children who never experience education without LLMs will never have the chance to develop that capital.
Furthermore, the broader environment in which this “first rotation” is encountering LLMs is not remotely the same as that in which the “second rotation” will encounter them. Indeed, the environments are being treated as if they are the same, when they should be different. My local school district is now integrating “AI” into primary and secondary education, because “universities and employers will expect AI literacy” — what tool is easier to learn to use than a natural language chatbot? Now, the workplace may appropriately demand certain kinds of efficiency from adult workers, and LLMs may just prove their usefulness in such cases (though in my view the jury is still out). Education, by contrast, should be inefficient, frictional, resistive. The mind is like a muscle: in order to grow, it must be repeatedly stretched to the limits of its capacity. The LLM chatbot is the ultimate anti-friction, super-efficient (except in, you know, water and energy) machine, which promises that you will never encounter resistance ever again; with the new “reasoning” modules, you’ll never have to think for yourself again. The implications for education hardly need to be spelled out.
Scott continues:
As pioneers in scientific forestry, the Germans also became pioneers in recognizing and attempting to remedy many of its undesirable consequences. To this end, they invented the science of what they called “forest hygiene.” In place of hollow trees that had been home to woodpeckers, owls, and other tree-nesting birds, the foresters provided specially designed boxes. Ant colonies were artificially raised and implanted in the forest, their nests tended by local schoolchildren. Several species of spiders, which had disappeared from the monocropped forest, were reintroduced. What is striking about these endeavors is that they are attempts to work around an impoverished habitat still planted with a single species of conifers for production purposes. In this case, “restoration forestry” attempted with mixed results to create a virtual ecology, while denying its chief sustaining condition: diversity.
I leave the resonances between this virtualized ecology and the state of education today as a trivial exercise for the reader.
(Scott’s remarks here of course have many parallels. Ivan Illich makes a remarkably analogous argument, with respect to medicine, in the opening of Tools for Conviviality; and Michael Polanyi offers a structurally similar observation about the Enlightenment “critical movement” that sought to banish belief from knowledge: “its incandescence had fed on the combustion of the Christian heritage in the oxygen of Greek rationalism, and when this fuel was exhausted the critical framework itself burnt away.”)
“Artificial general intelligence,” defined as “a computer able to do any cognitive task a human can do” — as envisioned for example in this new work of science fiction — is computationally impossible to achieve.
This is because “intelligence” — in the sense of “normal human intelligence,” which is presupposed by the above definition of “AGI” — is a) impossible to fully and simultaneously articulate (hereon inarticulable) and b) non-deterministic, and therefore in at least two senses strictly non-computable.
The inarticulability of intelligence has (at the very least) to do with its embodied and relational aspects. “Mind” is neither identical with nor even co-extensive with “brain activity”; rather, “mind” is (to crib from Dan Siegel’s definition) an embodied and relational process. Emotion in particular seems, as far as the causality can be determined, to be body-first, brain-second, such that it is only articulable after the fact (and in a way that changes the emotional experience). Michael Polanyi’s great work demonstrates in a philosophical register what musicians, artists, and craftspeople have always known intuitively: that the “cognitive task” of playing an instrument or using a tool depends on integrating the instrument or tool into one’s bodily experience, in an inarticulable way. And relationship through interaction with other embodied minds is such a complex process, with so many emergent layers, that not only is it poorly theorized or modeled now, it may be impossible to exhaustively theorize or model — especially because it primarily seems to take place in and through the pre- and in-articulate dimensions of cognition.
Meanwhile, the non-determinism of intelligence has (at the very least) to do with quantum randomness effects in the brain, which at the mesoscale (the level at which daily human, and most complex organic, life takes place) emerge into relatively well-understood and predictable patterns, but at the nanoscale (the relevant level for a hypothetical deterministic model of cognition) are by definition impossible to predict, or even observe without altering them. I am unaware of any good reason to think the quantum effects in, say, an extremely large and inorganic GPU farm, would be interchangeable with or even meaningfully similar to those in a three-pound organic human neural system.
What is computationally possible, as far as I can tell, is a (relatively) high-fidelity simulation of one aspect of human cognition: the comparatively deterministic, hyper-articulated aspect of human cognition which Iain McGilchrist identifies as characteristic of the left hemisphere (hereon LH) of our brains (subject, of course, to obvious caveats from theses 2–4). Note: I am not saying, and I do not take McGilchrist to be saying, that a fully-computed model of the LH itself is possible; only that its characteristic thought-style can be simulated in high fidelity, precisely because that thought-style is comparatively deterministic and hyper-articulated.
In currently existing frontier Large Language Models (LLMs), I take it something like this has already been achieved. Commercially available LLMs are now (to use a technical term) pretty good at processing and reproducing both written and spoken natural language — albeit in such a sterile “voice” that it renders the phrase “natural language” almost meaningless — and quite good at analytically processing huge quantities of formally similar information. These are two of the characteristic specializations of LH cognition, and I expect the next generation of LLMs to be significantly better on both fronts. Notably, some of the persistent failure modes of LH cognition and of LLMs are startlingly similar: “hallucination” or fabrication of nonexistent supporting evidence, a predilection for lying or circumventing rules in order to achieve a desired result, a fixation on parts at the expense of wholes, and so forth.
Because much of contemporary Western life (as McGilchrist and others have extensively documented) is already organized to systematically advantage that aspect of human cognition, it is therefore no surprise, nor, in a sense, any remarkable accomplishment, that frontier models now perform at the level of PhD students in solving advanced physics problems (albeit ones with solutions known to currently existing physics), or that some chatbots now “pass the Turing Test.” This is the natural end result of reimagining science as “knowledge production” and credentialing scientists accordingly, or of technologically reducing the typical person’s normal experiences of and capacity for conversation to so great an extent that we now take what the LLMs offer to be “human” conversation. This — and all the attendant social/economic disruption (about which more below) — is all possible without “AGI” itself being computationally feasible.
The second strike against the possibility of “AGI” comes from limits in physical resources. Achievements in LLM development up to this point have been enabled by energy use, water depletion, and resource extraction on an already massive scale. The anticipated investments required for “AGI” (e.g., according to AI 2027, $2 quadrillion in new data centers over the next 10 years!!!) will require exponentially more energy, water, and mineral resources that we either simply do not have on this planet or cannot physically extract from it at the desired rate (unless we invent, say, cold fusion). This is to say nothing of the land required to build all of the new infrastructure. I therefore anticipate that “AI” development will, as a function of resource scarcity, fail to get anywhere close to the scale of investment theoretically required for “AGI.” This may only become clear to “AI” developers, however, after they have already inflicted genuinely ruinous and probably irreversible damage to the environment and to the communities that depend on it.
Considering all this, I find it probable that without ever achieving “artificial general intelligence” as imagined in science fiction, advances in “AI” over the next several years will make all but the top 1–5% of current “symbolic capitalists” functionally obsolete. This includes both high-status sectors such as consulting, finance, advertising, software development, law and legal services, etc., and lower-status (or at least lower-paying) sectors such as journalism, copywriting, teaching, administration, graphic design, the social sciences, etc. (Note that several of these lower-status professions are ones which the Internet revolution has already been destroying.) By “functionally obsolete” I mean that it will almost always be more cost-effective, and nearly as useful, to “employ” an “AI agent” for a task that previously required one to hire a human being.
Sectors that are symbolic-capitalism-adjacent but require long training in embodied skill — e.g., healthcare, the experimental sciences, mechanical engineering, war — will not be functionally obsoleted, at least not so thoroughly. An inorganic robot will never be able to perform skilled tasks in the real world with the same level of ability as a trained human being (see (3) above)… and “organic robots” capable of such skill would pretty much just be, well, lab-grown humans, with many of the same inefficiencies and time-delays as regular humans. (Only a conspiracy theorist would see current Silicon Valley investments in IVF, genetic selection and editing, and artificial wombs as an attempt to create the conditions of possibility for lab-grown humans… right???) But some current features of jobs in these sectors — the features, that is, which are most akin to “AI” core competencies — will be permanently outsourced to “AI agents.”
The “trades” and the “crafts,” on the other hand, will not become thoroughly automated, though they will be in various ways automation-directed and -augmented. Machine maintenance and repair, for instance: machine failure might be AI-diagnosable, but the intuitive skill necessary for actual repairs will remain the province of humans. To deal with water, you’ll always need a plumber. Reality has a surprising amount of detail, and fields like construction and mining will always require meaningful and skilled human attention to reckon with that detail. Agriculture represents an interesting test case: a field that is currently extremely mechanized, but as the lowest-skilled tier of human labor becomes (out of necessity) far cheaper to “buy,” one which may reabsorb much of that excess labor capacity. At the more humanistic end of the spectrum, traditional crafts might make a comeback of sorts (similar to the vinyl resurgence), and the performing arts will always be the province of human beings, though probably far fewer people will be performing artists in fifteen years than are right now; in both cases patronage will be the only economically viable model. For the ultra-wealthy, owning or sponsoring something evidently made only by humans will be a status symbol.
In sum: I believe we are headed neither for the existential-risk, civilization-ending disaster scenarios envisioned by the “AI Doomers,” nor for the golden era of peace and prosperity and universal basic income envisioned by the “AI optimists.” (Where, exactly, do the optimists think the value creation for UBI will come from in an era of mass human unemployment?) Rather, I suspect in the near-ish term we are headed for a poorer, less culturally vibrant, less highly educated world with much greater wealth inequality. This will be a world in which many more people, including some who might otherwise have been symbolic capitalists, work in various kinds of manual labor or “trades”: agriculture, mining, energy, construction, maintenance. Others will depend, one way or another, on the patronage of the few ultra-wealthy. The whole service-economy apparatus that depends on a large leisure class will be semi-permanently diminished in proportion. It might, in other words, look in certain ways remarkably like the period of transition into the Industrial Revolution.
Over the long run, I believe in the resilience of humanity, chiefly because I believe in the faithfulness of God to His wayward creatures. We will not be destroyed or superseded by a higher form of intelligence, nor will we manage to completely destroy ourselves. We are remarkably adaptable and creative: life always finds a way. But we will find that the remarkably widespread prosperity of the last few decades in particular and the last two centuries in general is not, once unlocked, a permanent and automatic feature of human existence. It has depended on our irretrievably consuming the planet’s resources at an ever-accelerating rate. What cannot go on indefinitely must eventually stop. The mechanization snake will finally eat its own tail. The only question is how soon.
Reflections on Plato’s dialogues — or, if I break it out as a separate post, links to reflections — to follow below. The order is that of the Ukemi Audiobooks series The Socratic Dialogues, which dramatizes Benjamin Jowett’s translation with a full cast of great British actors (headlined by David Rintoul as Socrates). Jowett’s translation may be “out of date” from a scholarly perspective (which I am unqualified to judge) but in Rintoul’s hands (vocal cords?) is enduringly lucid. Ukemi also organizes the dialogues loosely according to a traditional early-middle-late periodization, which I gather is a contested approach, but it doesn’t seem to harm the understanding for a first pass. (I’m already suspecting that the “dramatic ordering,” following the chronology of Socrates’ life as best it can be reconstructed, might be more fruitful… but that’s for a second round, and I’m just beginning the first!)
Early Period
Apology. A barnstormer to start in medias res — better, near the end of things. We meet Socrates for the first time as he defends himself, before the assembly, against the charges laid at his door: of being an evildoer and “making the better appear the worse,” of being an atheist and introducing new deities, and of corrupting the youth. He does not succeed, though he is condemned by only a small margin. Socrates here introduces a number of key motives in the corpus: his claim to “know nothing at all” and thus to only be the “wisest” by exposing everyone else’s ignorance (which makes him quite unpopular); the deceptiveness of rhetoricians, who know how to speak elegantly and persuasively, but know really nothing of the Good and therefore of how to make men better; his own role as a sort of “gadfly,” provoking the polis into active self-reflection which it might otherwise neglect, and seeking thus to improve it; the absolute priority of caring for the soul over against all other cares (of property, wealth, body, etc.), and the absolute refusal to employ any tactics unworthy of the soul; the “daemon” or voice of God — or Conscience — speaking to him and infallibly guiding him toward the right course of action, though all public opinion be against him; his real indifference — perhaps, even here, optimism! — in the face of death, but absolute service to the truth. We also get a taste of the dialectic style as he cross-examines his accuser Meletus. It is an extraordinary bit of writing by Plato, moving and sweeping and incisive. Apology thus introduces and crystallizes the brilliant literary paradox of the Socratic corpus: Socrates disclaims all “rhetoric” and “elaborate defence,” portraying himself as a humble and artless seeker of wisdom — using brilliant rhetoric and elaborate defensive strategies to demolish his opponents' arguments. 
I loved Apology, and expect to revisit it with great enjoyment, but there is undoubtedly something inhuman and irritating (gadfly-like!) about Socrates. One understands instantly why Socrates had so many admirers in his own day (including Plato), and why Plato’s Socrates has been such a titanic figure in the history of thought and culture; and, equally, just why Socrates made so many enemies. Most of all I chafe at his claim that “the life which is unexamined is not worth living.” Is it not the other way round: no life which is lived is worth leaving unexamined?
Crito. A simple but moving dialogue, set in prison on the night before Socrates' execution, on the question: “Is it right to disobey an unjust law?” Socrates' answer in this case, of course, is No. The titular Crito (also mentioned in Apology) comes to him in prison and makes one last effort to persuade Socrates to escape his condemnation. But — despite his complaint in Apology that his trial was not conducted with full propriety — Socrates is determined to accept the death penalty meted out by the state. The most curious, and seemingly central, feature of the dialogue is the lengthy portion spoken by Socrates in the voice of the personified Laws of Athens. How, the Laws ask Socrates (and thus Socrates asks Crito), can one who is so personally committed to justice defy the demands and decisions of justice?
Charmides. Now we flash back several decades, and get going with our first, though assuredly not last, “What is X?” dialogue. The X in question is the virtue of temperance.
Laches.
Lysis.
Euthyphro. “What is piety?”
Menexenus. A parody of the funeral oration genre, in which the ostensible praise of Athens and of great Athenian heroes turns out to just yield a series of digressions, backhanded criticisms, and trite aphorisms.
Ion.
Gorgias. Is it okay to really rather dislike this dialogue? It is long, repetitive, and occasionally mean-spirited. The subject matter is of great importance, of course: moving from the more specific question “what, if anything, does a teacher of rhetoric need to know about goodness?” to the general question “what is the best way of life?”. Yet in these early dialogues Plato does not often set up Socrates' interlocutors as particularly compelling or thoughtful — see Euthyphro or Ion and their namesakes — but in Gorgias he seems to regard, and Socrates seems to treat, all three of Gorgias, Polus, and Callicles with barely-disguised contempt. And they are, in differing ways, worthy of contempt (less so, perhaps, Gorgias).
Protagoras.
Meno.
Euthydemus. A merciless satire on sophistry. At first Socrates is baffled, then infuriated, then bemused, then amused by the “method of contradiction” employed by the brother sophists Euthydemus and Dionysodorus; finally he pulls himself together and shows himself a master at it, if he chooses. There is a substantive philosophical point lurking within the mockery, though. Euthydemus and Dionysodorus are boxers who have but lately taken up sophistry (in order to make money and increase their reputation). They have grasped that the key to successful sophistical argumentation is equivocation: exploiting multiple meanings of their opponents' words in order to catch them in apparent contradictions. Of course, as soon as one scrutinizes their arguments, these fall to pieces, so the sophist must keep his opponent permanently off balance and give him no room to strike back. Sophistry is dialectic reduced to boxing: a contest of strength and speed in which one hit is as good as another. The true philosopher, however, cares not at all for victory, but only for the pursuit of truth. And truth requires valid arguments, clear and consistent definitions, and patient exploration.
Lesser Hippias.
Greater Hippias.
Middle Period
Symposium.
Theætetus. Fantastic. Far and away the most enjoyable, dare I say riveting, of the dialogues so far. “What is knowledge?” Must revisit and write a longer reflection.
Phædo.
Phædrus.
Cratylus. Some people, apparently, say this dialogue is “tedious.” I had the exact opposite reaction! (Perhaps I am a tedious person…) Admittedly, for the first two thirds, I repeatedly thought, “Surely you can’t be serious!”, as Socrates offered increasingly speculative and unsupportable folk etymologies for all sorts of words (though the more abstract a concept denoted by a word, the less speculative it seemed to me) to supposedly show that the relation between a word and the thing it represents is not arbitrary or merely conventional, but is based on nature… only to experience philosophical whiplash in the final third as Socrates dismisses that linguistic theory and argues that words are given by convention and have no necessary naturalistic aspect!
Parmenides. This one is fascinating, and demands revisiting. A precocious, but philosophically underdeveloped, nineteen-year-old Socrates meets Zeno (he of the Paradoxes) and the famous Parmenides. Socrates knows the teaching of the great Heraclitus that all things are in constant flux and motion (“You cannot step into the same river twice”): the One is an illusion, the Many is all. Parmenides and Zeno, on the contrary, propose to show that eternal reality is unchanging and flux is impossible: the Many is an illusion, the One is all. Socrates, mock-naïvely, proposes a synthesis: all earthly things are indeed in perpetual flux, but they derive their thing-ness from participating in eternal unchanging Forms or Ideas. Parmenides, somewhat unexpectedly, dismantles this proposal with six increasingly devastating counter-arguments, exposing all sorts of internal contradictions, absurdities, infinite regresses, and the like. But then… Parmenides flips the script and sets out to show, in tremendous (and occasionally mind-numbing) specificity, how one might after all defend a theory of Ideas as logically coherent. Does he succeed? Can the One and the Many be held together? What is the real point of the deductions? It’s hard to say. I must reread it, and write a longer reflection.
Republic. Fascinating, riveting, eye-opening: “oh, that’s where that comes from!” a million times. Must revisit. Must write a longer reflection.
Late Period
Timæus. Whatever the opposite of riveting is: I really, really struggled for motivation to keep listening to this one. I know it’s one of the most influential texts in the history of Europe, but even with the capable David Timson reading the part of the eponymous monologist, I found my attention slipping over and over again.
Critias. It’s Númenor! Or, really, Númenor is Atlantis: “But even the name of that land perished, and Men spoke thereafter not of Elenna, nor of Andor the Gift that was taken away, nor of Númenórë on the confines of the world; but the exiles on the shores of the sea, if they turned toward the West in the desires of their hearts, spoke of Mar-nu-Falmar that was whelmed in the waves, Akallabêth the Downfallen, Atalantë in the Eldarin tongue.” More seriously, we do get hints — reminiscent of Republic (which takes place, dramatically, just the previous day) — at the Platonic ideal for a political constitution.
Sophist. The follow-up to Theætetus is not quite as much fun, though it introduces a fun new hermeneutical device: most of the philosophical exposition is not in the mouth of Socrates, who is a mere spectator, but spoken by a nameless Stranger from Elea (home of Parmenides and Zeno). The bulk of the dialogue consists in the search for a single definition via numerous “divisions” and “classes” — much more similar in some ways to Parmenides (to which it makes reference) than to its ostensible precursor. And of course the sophist as a figure is an unflattering subject. It’s quite interesting, however, after hearing Plato decidedly privilege the One over the Many in Republic, to hear some… back-pedaling, maybe? Perhaps the One and the Many can be held together after all. Dramatically speaking, Parmenides is set at the very outset of Socrates' philosophical career, whereas Theætetus, Sophist, and Statesman are said to take place at nearly the end of his life.
Statesman. A direct continuation from Sophist, though Socrates takes over from Theætetus as the Eleatic Stranger’s primary interlocutor.
Philebus. At one point near the three-quarters mark of this dialogue, Protarchus, who is Socrates’ principal interlocutor, remarks to the philosopher, “Your many repetitions make me slow to understand.” Socrates responds, infuriatingly, “As the argument proceeds, my boy, I dare say that the meaning will become clearer.” Protarchus’ dry response, “Very likely,” sums up my experience of this dialogue. Here is an undoubtedly sophisticated, mature, exacting reflection on a classic Socratic-Platonic theme — the superiority of a life spent seeking wisdom to a life spent seeking pleasure — whose intelligibility is compromised by its repetitiveness. The argument is just difficult to follow. Socrates multiplies distinctions, which no doubt are useful, in service of the general thesis that the enjoyment of pleasure (and its coordinate, the avoidance of pain — though how, precisely, they are coordinated is one of the many subjects of discussion) is not the highest good in life, but rather a faculty like any other, which admits of distortions and falsities, and which therefore cannot be the highest good of a human life. Here there are none of the dramatic fireworks of the earlier Gorgias, which touches on similar themes (and which is referenced occasionally). It was, however, worth listening to this dialogue just for the hilarious aside near the beginning in which Socrates describes those young men who are first intellectually thrilled by the paradoxes of One and Many (15e–16d); not much about Philosophy Bros has changed, it seems, in at least 2400 years.
Laws.
One recurrent theme throughout Plato’s work, increasingly prominent in the later dialogues (though I recall it as early as Euthydemus), is the challenge posed for his theory of knowledge by falsehood or false knowledge. The problem goes something like this. Everyone agrees that there are things called falsehoods which we can utter. Yet, logically speaking, this should not be possible. After all, we speak using words; the meaningfulness of words depends on their signifying things that really have existence; there are no words to speak of non-existence; therefore, we can never speak of that which does not exist; so also we can never speak falsely but can only speak the truth. Similarly, we can never know anything false, but always and only know things that are true; our difficulties come not from false knowledge, which is strictly speaking a contradiction in terms, but from ignorance alone. The argument sounds persuasive when considered abstractly, yet it yields an obviously ludicrous conclusion! It receives its most extended treatment, if I recall correctly, in Sophist, where the Eleatic Stranger explores the problems raised by the term “non-being”. What does the term “non-being” actually indicate?
There is something here formally similar to — and no doubt influential upon — the evidently unsolvable (in the technical sense, absurd) problem of evil in the Christian tradition. God, Who created all things, is (on the classical-theistic view) perfectly good, perfectly knowledgeable, and perfectly capable. He must therefore have created all things perfectly. Furthermore, as He is (by definition) the unique Creator, no creature can contravene His created design or overrule His will even if it wanted to. So where does evil come from? For it is evident to all that something has gone horribly wrong. Does it come from some kind of deliberate possibility for evil which He gave to His creatures as part of their creation? If so, how is He not the creator of evil also? And in that case, how can He be perfectly good? For that matter, how would a perfectly good Creator be able to conceptualize the possibility of evil so as to deliberately create it? The limitless perfections of classical theism seem to be in tension. But the alternatives are even less appealing. If evil is somehow inherent in the nature of creatureliness, such that anything with any limitations at all has not only a potentiality for but an actuality in evil, then either “evil” is a fundamentally relativized category with no real purchase, or it might be better never to have been created at all. Or if the Creator is limited in any of His moral goodness, knowledge, or capacity, one must suppose that evil might be able to permanently and ultimately gain the upper hand over Him and His creatures. One could fall back on saying that evil cannot exist, because it is a logical impossibility with no satisfactory explanation — yet we have a strong and near-universal intuition that it does exist.
With the beginning of this year, I have determined to patch some of the (very large) holes in my reading of the classics. I have never read Plato or Aristotle in any sort of panoptic way, let alone later major philosophers of antiquity such as Seneca or Plotinus; my reading of the Church Fathers has been almost entirely occasional and extremely selective; it has been years since I have read either the Iliad or the Odyssey (and I have in fact never read the Aeneid). My major reading for roughly the last two years has instead focused on the characteristic novelties and problems of modernity, as articulated by modern writers: George Steiner’s Real Presences, James C. Scott’s Seeing Like a State, Lorraine Daston’s Rules, Michael Polanyi’s Personal Knowledge, Jason Josephson-Storm’s The Myth of Disenchantment, Erazim Kohák’s The Embers and the Stars, and Alasdair MacIntyre’s After Virtue and Three Rival Versions of Moral Enquiry; in a more explicitly scriptural/theological key, my teacher Jeremy Begbie’s Abundantly More, my teacher Kavin Rowe’s essays on New Testament hermeneutics, Brevard Childs' Biblical Theology of the Old and New Testaments, Albert Schweitzer’s The Quest of the Historical Jesus, Ephraim Radner’s Time and the Word, and Andrew Louth’s Discerning the Mystery; and, of course, the granddaddy of them all (by at least volume if not temporality), Iain McGilchrist’s The Master and His Emissary and The Matter With Things.
If your guiding intellectual question is “how shall we live with integrity as Christians in modernity?”, as I am beginning to suspect mine is, this body of literature possesses obvious importance. I am nowhere close to having plumbed the full depths of this tradition (or complex of traditions), and do not intend to stop reading in this area. My reading project on the nature of tradition will bring me back up to the present age with (at least) Gadamer, Lindbeck, and more MacIntyre, and I have several more major works of twentieth and twenty-first-century philosophy and theology already waiting for me on my shelves (Heidegger, Cassirer, Adorno & Horkheimer, Bultmann, Frei, Jenson, Rosa, and so forth). And I’m currently reading through David H. Kelsey’s Eccentric Existence, which (whatever else, good or ill, I might say about it) represents a one-man (two-volume) masterclass in theological engagement with modernity. So in no way am I withdrawing my attention from modernity. Rather, two things have crystallized my sense that it is time to turn (at least more of) my attention to the Old Things.
The first is that I have found myself increasingly overpowered by what I call in shorthand the “I do not understand Hegel” problem. The great theologians and philosophers of the not-too-distant past — and, still, the greatest in the present — were staggeringly, now almost incomprehensibly, literate and erudite figures. Before publishing his great work on hermeneutics, Gadamer was a noted expert on the pre-Socratics. Karl Barth is sometimes accused of not having read the tradition fairly, but he has never been accused of not having read it thoroughly. Brevard Childs seems to have truly read every book ever written. Part of what makes Hegel singularly difficult is, of course, his ruthlessly abstract and intensely tedious style; but no doubt another part is that very few people today are educated the way that he and his peers were. Take a slightly more recent example: what man of letters teaching at the University of Michigan today would dare assign his undergraduate students a reading list like W. H. Auden’s? If philosophy and theology are the Great Conversation, one must learn to discern and hear the enduring presence of the older voices who have left the room before one can truly contribute or at least understand.
The second is that, despite the immensity of my to-read list and the paucity of my already-read list, I do feel that I reached an inflection point with the turning of the year. That was when I finished reading Karl Barth’s Protestant Theology in the Nineteenth Century — the bulk of which is actually about eighteenth-century philosophy and theology as the “background” to nineteenth-century theology; and it must be said that Barth appears to enjoy writing about Rousseau, Kant, Hegel, and so forth a great deal more than the nineteenth-century theologians who are the book’s ostensible subject — and an unofficial trilogy by Lesslie Newbigin: Proper Confidence, Foolishness to the Greeks, and The Gospel in a Pluralist Society. These, somehow, coordinate in my mind: Barth and Newbigin (who was, not coincidentally, heavily influenced by Barth) together outline the negative space for and sketch the positive content of the properly Christian post-liberal synthesis which we desperately need — or which, at any rate, I need in order to feel intellectually satisfied. In the coming months, as the intellectual dust from my aforementioned reading settles, I may take a few stabs at describing what seem the chief features of that synthesis. But I also sense, if dimly, that in order to know what I really mean by those features, I will need some more pre-modern context and contrast. I can thus leave Barth and Newbigin for a little while, confident that I will return to them better able to understand what is fruitful in what they offer.
It is high time, then, that I actually read Plato and Aristotle (not to mention Seneca and Plotinus); that I (begin to) read through the Church Fathers; that I revisit Homer (and meet Vergil anew). I am doing so as follows. For Plato, I have launched into the Ukemi Audio series dramatizing the Socratic dialogues (in Benjamin Jowett’s translation), with the astounding David Rintoul as an unforgettable Socrates — and intend to write here, for my own benefit, at least a short reflection on each dialogue. For the Fathers, the obvious place to start is Volume I of the old Schaff set, with Sts. Ignatius, Justin, Irenaeus, and their comrades. With the Iliad, which I have at least read before (perhaps more than a decade ago), I have cracked open Emily Wilson’s recent translation. In none of these cases is the point a deep, doctoral-seminar level understanding. Rather, the point is familiarity, breadth, and fresh inspiration: to drink deep from the old and honored wells.
Those who aim at what is beyond their powers, and thus run the risk of falling into error, who waste their real capacity in order to acquire some capacity that is illusory, are also men of curiosity in the olden sense… Do not overload the foundation, do not carry the building higher than the base permits, or build at all before the base is secure: otherwise the whole structure is likely to collapse. What are you? What point have you reached? What intellectual substructure have you to offer? These are the things that must wisely determine your undertaking. “If you want to see things grow big, plant small,” say the foresters; and that is, in other words, St. Thomas’s advice. The wise man begins at the beginning, and does not take a second step until he has made sure of the first. That is why self-taught men have so many weak points. They cannot, all by themselves, begin at the beginning.
— A. G. Sertillanges, O.P., The Intellectual Life: Its Spirit, Conditions, Methods (tr. Mary Ryan), 27.
Goal for the next stage of my intellectual life: Answer his questions. Begin again from the beginning.
In turning away from Hegel the [nineteenth century] acknowledged that, having reached the summit of its desires and achievements, it was dissatisfied with itself, that this was after all not what it had intended. It set Hegel aside and tried again, but did not even reach such a peak a second time, and thus manifestly it was bound to be even less satisfied than it was before, although it pretended to be. Where does the fault lie? In Hegel? Those who study him will not receive this impression. If it is a question of doing what the entire nineteenth century evidently wanted to do, then Hegel apparently did it as well as it could possibly be done. Or is the reason that afterwards the age of the great men was past, that there was no genius present in the second half of the century to carry out the better things which the century it seems had in mind in turning away from Hegel? But it is always a bad sign when people can find nothing to say but that unfortunately the right people were lacking. This should be said either always or never. Every age, perhaps, has the great men it deserves, and does not have those it does not deserve. The question only remains, whether it was a hidden flaw in the will of the age itself, perfect as the expression was that it had found in Hegel, which was the reason why it could not find any satisfaction in Hegel and therefore not in itself, and yet could not find any way of improving upon and surpassing Hegel, and therefore itself. It might of course be possible that Hegelianism indeed represented in classic form the concern of the nineteenth century, but precisely as such came to reveal the limited nature of this concern, and the fact that it was impossible to proceed from it to the settlement of every other question of truth. And that for that reason it was, curiously, condemned.
— Karl Barth, Protestant Theology in the Nineteenth Century, 374. The whole lecture is an absolute tour de force: elucidating both what, for both philosophers and theologians, makes Hegel such an immensely attractive option — and why Hegel, taken on his own terms (like nineteenth-century thought as a whole), ultimately represents a cul de sac for those disciplines.
Till We Have Faces is set in a pagan, pre-Christian world: the world of a “barbarian” Balkan kingdom (“Glome”) well to the north of Greece, where pagan religion still holds sway but Greek philosophy, far to the south, has already come to a kind of initial maturity. (During Orual’s reign, Glome acquires a copy of a “long and difficult book” that begins with the line All men by nature desire knowledge; this is Aristotle’s Metaphysics, which is a clever tip of Lewis’s hand that the tale is set sometime in the two or three centuries before the birth of Christ.) The Greek slave and expatriate Lysias (“the Fox”) gives Orual a thorough schooling in the Greek philosophical religion, a narrative device by which Lewis can explore the encounter between the pagan and philosophical views of the gods (or the Divine Nature, as the Fox would prefer to say).
Orual is drawn to, and sometimes even momentarily persuaded by, the simplistic, naturalistic clarity of the Fox’s philosophy: the gods of the heathens are no gods at all; the Divine Nature is not a thing with passions that must be appeased, but is pure goodness and light; Nature is a great interconnected web of causation, and there is neither “chance” nor the direct intervention of the gods in the natural order (despite the Fox’s periodic thanksgivings to “Zeus the Saviour”), so that things only happen which are “in accordance with Nature”; pagan religion is, more or less, a populist fig leaf for the temple’s political machinations vis-a-vis the palace. This philosophy is particularly useful to her when she becomes queen and shuts her Orual-self up in a small locked space within; it offers her tools for the task of good governance (at which, the concluding lines of the book attest, she succeeds spectacularly), and seems to provide rationalization for suppressing her pangs of conscience.
Nevertheless, neither does Orual abandon the local pagan religion: it has its own political usefulness, of course, in binding together (the root meaning of “religion”) people and palace, and she must be seen participating in the Ungit-cult; more than that, it provides a sense of holiness as something visceral and awe-inspiring, which the philosophical religion never does, associated with blood and incense and darkness (“holy places are dark places”); the “sacred stories” are of course full of “contradictions” and implausibilities, but they give paradoxical voice to deep human instincts and resonances with the natural world — so the (bastardized) story of Psyche becomes a figure of the cycle of the seasons and the cult thereof; without the shedding of blood there is no expiation of sins. And, of course, Orual does — once, when it is too late — see the god of the mountain, and she never forgets the experience (though she does her level best, so she tells herself, to minimize its power over her) of that momentary encounter with a Being of another order.
The early drama in which the Priest schemes to have Psyche sacrificed is a perfect example of this. The Priest recognizes, with awful intuition, the need for a “scapegoat” (in the Girardian sense) — one must die on behalf of the many for the whole people to be saved; the power of the paradox that, “in a mystery,” this scapegoat must be both the best that the land has to offer and the worst offender against the gods; the further paradox that — depending on the sex of the Blessed / Accursed — the god to whom the victim is offered may be either Ungit (the Awful Feminine) or Ungit’s son (the masculine god of the mountain), a thing of glorious majesty, and that either may simultaneously be the Shadowbrute, a thing of dark terror; the mystery that being the meal of the Brute and the wife of the God may not be all that different. The Fox has no time for any of this, and attempts to pick apart the seemingly incoherent strands of the mystery. But the Priest recognizes the deep coherence of the mysteries, the coincidentia oppositorum at the heart of reality, and so conquers the Fox: “Why not?”
One distinctive difference between the pagan religion and the philosophical religion, therefore: the pagan religion inspires faith. Even a shrewd political operator like the Old Priest, who understands perfectly well what he is doing vis-a-vis the power of the palace, nevertheless has absolute confidence (pistis) that what he believes and teaches is true. The Fox, on the other hand, has dissolved his own faith in the gods through philosophical criticism, but as a result no longer has the certainty which he needs to counter the power of the Priest. So, when the old King goes to offer Psyche in sacrifice, he finds that he genuinely believes, if only for the time he is making the Offering. The old pagan Ungit-stone, a dark and shapeless thing, inspires more faith and evokes more holiness than the new Hellenistic Aphrodite-statue, which is recognizably human. (As the peasant woman says to Orual on the day of the Birth-feast: “‘That other, the Greek Ungit, she wouldn’t understand my speech. She’s only for nobles and learned men. There’s no comfort in her.’”) It is narratively ambiguous, I think, whether the Great Offering is in fact efficacious to rescue Glome from its troubles; the Fox of course dismisses it according to his exclusively materialist view of causation (“everything would have had to be different from the beginning”), but it is hard for Orual to avoid the impression that the gods have indeed looked with favor on the sacrifice and sent Glome rain, peace, and good fortune.
The result of this exploration is a kind of negative-space outline for a genuinely Christological — which is to say, incarnational — view of Divinity. The Fox is right in his own way: there really is an absolute bifurcation, an “infinite qualitative distinction” between the immanent and the transcendent, and the gods are not at all like men. That is part of the power of Orual’s experience on the mountain after Psyche has uncovered her lamp. And yet — holiness is found in darkness and paradox and mystery before it is seen in clarity and light. The transcendent hides itself within the immanent, so that it may be sought by Faith, not merely apprehended by Reason. Psyche is not the slave of a mountain outlaw, but is — or was — the wife of the god. Neither paganism nor philosophy can be simply transmuted, for Lewis, into genuine Christianity; but neither can the genuine insights of either into the nature of things and human experience be dismissed as accidental — and if one believes in an absolutely transcendent Creator who has enfleshed Himself, immanentized Himself, one indeed cannot. Both the pagan religion of Glome and the Greek philosophical religion are, in their own ways and precisely in their interaction, praeparatio evangelica.
Piranesi:
“I realised that the search for the Knowledge has encouraged us to think of the House as if it were a sort of riddle to be unravelled, a text to be interpreted, and that if ever we discover the Knowledge, then it will be as if the Value has been wrested from the House and all that remains will be mere scenery. The sight of the One-Hundred-and-Ninety-Second Western Hall in the Moonlight made me see how ridiculous that is. The House is valuable because it is the House. It is enough in and of itself. It is not the means to an end.” (60–61)
“The World feels Complete and Whole, and I, its Child, fit into it seamlessly. Nowhere is there any disjuncture where I ought to remember something but do not, where I ought to understand something but do not. The only part of my existence in which I experience any sense of fragmentation is in that last strange conversation with the Other.” (71)
Evangelical theology is trapped in a perpetual struggle between its two uneasily coexisting traditions: biblical theology and systematic theology. The dispute is always the same. It never ceases, never disappears, never makes real progress on genuinely reconciling the traditions, but continues forever. The players come and go, the ostensible matter of controversy shifts, but the arguments never change. This is happening, in one form, right now with John Mark Comer and the New Calvinists; it happened in the last decade with the debate over the Gospel between “Team King Jesus” and “Team Gospel Coalition”; it happened in the decade before that with N. T. Wright and John Piper on justification (funny how the New Calvinists keep popping up here!); and so forth ad infinitum. Squint a bit, and even the early stages of the Reformation outline the same form of controversy: Luther the doctor of Old Testament, Zwingli the advocate of expository preaching, and so forth for the “Bible” side, and Eck, Cajetan, various Popes, et al. for the “theology” side. (My personal favorite example of this is the pair of books published by IVP Academic a few years ago, authored respectively by Hans Boersma and Scot McKnight: Five Things Theologians Wish Biblical Scholars Knew and Five Things Biblical Scholars Wish Theologians Knew.)
Here is the general form of the controversy. Note that whenever it wells up and spills over, it can do so under the impulse of either tradition — though assigning responsibility with any confidence is difficult; it is one perpetual-motion controversy, and the whole thing (viewed as neutrally as possible) is a chicken-and-egg problem. However, let us suppose it is (re)triggered by the Bible side:
A theologian specializing in Biblical interpretation (which is all that a “biblical scholar” really is) publishes some argument, taking as his (and it is, as we know, usually a he) point of polemical departure some commonly taken-for-granted bit of doctrina, especially as it is popularly preached rather than scholastically described: for example (to pick, almost at random, from N. T. Wright), the gospel is about you “getting saved” so that you will “go to heaven when you die.” This bit of (again, popularly expressed) teaching is then found to be a remarkably inadequate representation of the biblical texts usually adduced to support it: so John 3, Romans 3–8, Revelation 21–22, and so forth actually testify that “salvation” and “eternal life” have a present dimension and reference, and the future hope is primarily for heaven “coming down to earth,” not us escaping earth and going “up to heaven”: not “life after death” so much as, in Wright’s (brilliant) phrase, “life after life after death.” Often the popular misrepresentation is straightforwardly taken to be the responsibility of some major, and beloved, historical-theological figure in the tradition: Augustine, Luther, and Calvin are popular choices here. (Sometimes it is not the (re)originator of the controversy who does this, but some less-cautious disciple.)
These warning shots arouse the systematic theologians from their dogmatic slumbers (noodling away over the finer points of Jonathan Edwards' doctrine of the beatific vision, or Kuyper’s theology of church offices, or whatever), and they determine to return fire. The more historically minded pursue lines of historical critique: the representation of Augustine (or whomever) is in fact a misrepresentation, and Augustine was far more careful than he is generally criticized as being. What we most need today, in fact, is not less Augustinianism, but more Augustinian Augustinianism! Or: the biblical theologian is simply and naïvely repristinating a historical error (e.g., the Hellenization thesis) which has been weighed, measured, and found wanting. The more philosophically minded, similarly, take the concepts deployed (again, simply and naïvely) by the biblical theologian and subject them to philosophical-theological critique: this is (or depends upon) univocity repristinated, or Social Trinitarianism uncritically retrieved, or Socinianism resurgent. Sometimes this sort of thing has the genuinely salutary effect of bringing the various parties' philosophical and theological presuppositions directly into view. Often it reads more like an attempt to overwhelm the opponent with force of Weighty Words.
Now the biblical theologians sharpen their exegetical tools to reply. There are a number of forking paths here, but they mostly consist of the same basic move: Sure, they say, you may be right about what Augustine said: but was Augustine right about what the Bible said? The systematicians are far too concerned with the neatness of their systems, far too quick to find dogmatic concepts — which took centuries to develop — in the text of the Bible itself. Or they are far too quick to occlude (here enters a historical-theological presupposition) what was imaginable, and therefore mean-able, to the author of a particular book in favor of the Church’s later consensus about what that book must really have meant: the conceptual equivalent of “illegitimate totality transfer” in semantics. This is typically where, in New Testament, references to “Second Temple Jewish” and, in Old Testament, references to “Bronze Age Israelite” thought occur: no Second Temple Jewish reader had such and such a conceptual category as to have been able to comprehend what Augustine later argued, and likewise Augustine had lost some key conceptual categories possessed by a Second Temple Jew. You know, the Hellenization thesis may be discredited in certain areas, but come on, you’re really telling me that by transposing the Biblical subject matter into the language of neo-Platonism there was not an iota, not a jot that passed from the Law’s original meaning? Are we even evangelicals anymore (rather than — horror of horrors! — Roman Catholics) if we are willing to prioritize a later theological development over what the Bible says?
The systematicians, of course, cannot abide this sort of suggestion. Naïve (you keep using that word) historicism! is the charge flung at the biblical theologians. You are operating from theological presuppositions just as much as we are, but the difference is a) you don’t know what yours are, whereas we do, and b) yours are wrong. Sometimes there is a historical doubling down, a sort of fighting the historicizing fire with fire: Don’t you know that your same argument about this same text was made in, say, the third century by [checks notes] Paul of Samosata? To reject Paulianist heresy, we must also reject your argument. Or: You have, damningly, overlooked a most critical distinction made in the 17th century by Francis Turretin — which convincingly vindicates our interpretation, and demolishes yours. The more thoughtful and careful systematicians, at this point, are actually usually willing to own that yes, they are willing to prioritize a later theological development (though of course for evangelicals it is that of, say, Martin Luther and not the Council of Trent, for… reasons!) because they believe it more effectively preserves some essential truth taught in the Bible — or which itself must be preserved to in turn preserve some essential truth taught in the Bible.
And so on, and so forth, unto the ages of ages. Eventually an individual controversy will run out of steam and settle back down under the surface. But never for long. All this has happened before, and it will all happen again.
This process — which I describe above with great love for both sides, and with tongue firmly in cheek — is a kind of dialectical expression of the basic aporia of the evangelical tradition. Belonging myself, however uneasily, to a stream of that tradition, I believe and affirm unhesitatingly every word of what follows in this paragraph, and thus belong to the realm and feel the force of the aporia. The Bible possesses a unique and singular authority, an authority distinct from and superior to any human tradition. What it speaks to us shares fully in the eternal authority of the Triune God, of Whom it testifies singularly and authoritatively and Who is singularly and authoritatively God (the Shema means more, but not less, than this). It is therefore of supreme importance to understand and obey what it is speaking. However, there is no non-traditioned, perfectly rational position from which any human can interpret the totality of what it is speaking. Add to this that the content and message of the tradition, as we now express it, is derivative from but not identical to the content and message of the Bible: it is, unavoidably, at a minimum that content and message — which was originally imparted in one moment of history — interpreted and therefore translated into a new moment of history. This renders its traditioned re-presentation remarkably contingent when viewed historically, even as such tradition is simultaneously inescapable and necessary. It is only the (theological) confession of Divine Providence which keeps this sheer contingency, for us, from tipping into simple invalidity.
Thus, the Bible’s authority seems to be not just an article of faith but the greatest article of faith, the article of faith on which all other articles of faith depend — but simultaneously the more it becomes an article of faith, the less contact it seems to have with not only reality as historically experienced but also its own text and matter. Thence the divide between biblical and systematic theologians. The biblical theologians protest when the systematicians take the text of the Bible beyond what it presents itself to us as being; the systematic theologians protest when the biblicists set the Bible over against the articles of faith which depend upon it, which it has generated, and are in turn what we live. This dynamic is constantly re-presenting itself at the level of the matter under controversy. Take the doctrine of God. The more that, for instance, under the influence of philosophical criticism, God becomes absolutely transcendent, unqualifiedly impassible, and so forth, the less contact this God-concept seems to have with the God represented in the narratives of Scripture, which naturally invites rebuke — but equally a God-concept simply transposed out of the narratives of Scripture invites this philosophical criticism: if God were not absolutely transcendent and unqualifiedly impassible, could the sorts of exalted things Scripture says (and we are invited to say) about His faithfulness and justice and so on really be maintained?
“As ministers,” Barth remarks in one of his great early essays, “we ought to speak of God. We are human, however, and so cannot speak of God.” Put differently: we must re-present the Bible, but can we — and may we? Everyone wants to live “the religion of the Bible,” but nobody can live “the religion of the Bible” in the strictest sense of the word, because the Bible does not so much present as generate a “religion” which is both greater and lesser than itself. Nobody wants to “go beyond what is written” — but nobody can truly “not go beyond what is written,” because as soon as one asks the question “what is written?” it inevitably comes coupled with the question “how do you read it?” Both parties in the debates are permanently trapped in this dialectic. Everyone involved knows all this, at a more or less tacit level. The debates are almost entered into with a sigh of dismayed recognition, as a performance that must be undertaken yet whose non-outcome is fully known and expected. At times they seem to be an exercise in deflecting our attention from this basic aporia: like the head of Medusa, it cannot be looked at directly, lest it turn us to stone (or, yet worse, to Rome). No new Aquinas or Calvin or Barth has come along, someone who can embody both traditions so persuasively and definitively as to reconcile them and generate a new synthetic tradition of evangelical theology. Is such a reconciliation possible? Where could such a figure come from? Who is sufficient for these things?
And how, then, shall we live? For we must, we cannot but, go on with living even as we theologize, and if our theology — in all its detail and in its grand sweep — has nothing really to do with our living (if, that is, such a thing is even possible) then it is a grand experiment in foolishness, in “wise words taught by mere human wisdom.” The controversy wells up again, and again, and again because all parties recognize that in it the form of our life before God is somehow at stake. There is a way (that is, The Way) and it must be walked in. I am tempted to conclude here on a note of despair for the insolubility of this problem, and yet I cannot despair entirely. For, low and gentle, yet firm, I hear again the voice of The Way, cutting through the noise of the controversies and of my own mind, speaking the simplest words of all, inviting, beckoning, pleading: “Come to Me, all you who labor and are heavy laden, and I will give you rest.”
For this reason I do not and cannot ultimately choose a “side” in these theological controversies. Rather, wherever I encounter them — on either side — I will tend to throw in my lot with those who seek to speak and live the words of The Way after Him. I will trust in His words — His Word — to me, because there is no deeper metaphysical or ontological substrate than this trust. That is why any of us have ended up in these controversies to begin with, after all: Before we ever wrestled with the concept of history, or the hermeneutics of Biblical narrative, or the concept of God, we heard the Voice of the Way and found ourselves irresistibly drawn towards Him, found ourselves convinced that He is the Truth and the Life, came to know Him as the pearl of great value to have which it is worth selling all. And that is where we will still be after the controversies cease, when we will see no longer as in a mirror dimly but face to face.
Here I would like to advance the admittedly speculative hypothesis that the peculiar quality of music lies in its ability to produce a highly specific form of relating to the world, one in which our relationship to the world as a whole becomes tangible and thus can be both modulated and modified. Music in a way negotiates the quality of relation itself, whereas languages and sign systems can only ever thematize one particular relationship to or segment of the world at a time… [Listening] to music has a different orientation than seeing, grasping, or feeling. The experience of music suspends the division between self and world, transforming it in a way into a pure relationship. Music is the rhythms, sounds, melodies, and tones between self and world, even if these of course have their source in the social world and the world of things. The universe of sound consists in its ability to express or generate all manner of different and differently nuanced relationships: strife, loneliness, desolation, resentment, alienation, and tension, as well as yearning, refuge, security, love, responsivity. This pure relational quality adheres to music in all of its manifestations, high culture as well as pop culture, and allows us to comprehend how it is that music and dance have always been so closely linked. …
[95] Only from this perspective can we understand how, on the one hand, music possesses the power to change the way we are situated in the world (our “attunement”), while, on the other hand, we crave different kinds of music depending on our relationship to the world at a certain moment. Even (and especially) music that expresses sadness, melancholy, hopelessness, or strife is capable of moving us, because we are able to experience it as resonating with our own sadness, melancholy, or strife, i.e. with our own relationships to the world. We experience being moved by such sounds as something positive (even and especially when we are brought to tears) and not at all as something that itself makes us depressed. To the contrary, it is when we are no longer touched, moved, or gripped by music that we experience alienation or, in extreme cases, depression, as it is then that we experience the world as mute, even as it is still so loud. …
If my contention is correct that music negotiates the quality of relation (to the world) itself, then we can begin to understand the eminently important function that it is capable of fulfilling in modern society. Music affirms and potentially corrects, moderates, and modifies our relation to the world, repeatedly re-establishing it as the “ur-relationship” from which subject and world originate… Seen from this perspective, the “musicalization” of the world since the twentieth century seems to be an almost inevitable correlate (because complementary in its effects) to the growing reification of our two-sided bodily relationship to the world[.]
— Hartmut Rosa (tr. James C. Wagner), Resonance: A Sociology of Our Relationship to the World (London: Polity, 2019), 94–95
[Our] work is not done simply by distinguishing between good resonance and bad alienation. Rather, it is here that our conceptual problems begin. First, it is possible to identify experiences that exhibit characteristics of “negative” resonance, either because they are directly harmful to subjects or because they have normatively undesirable or even disastrous “side-effects.” Second, the longing for total and lasting resonance with the world itself turns out to be a subjectively pathological and in political terms potentially totalitarian tendency. Third (and relatedly), we shall see that forms and phases of alienation are not only unavoidable, but also required for the subsequent development of resonant relationships. It will, moreover, prove necessary to conceptually differentiate between brief, often intense moments of resonant experience and lasting resonant relationships, which are necessary to provide a stable and reliable basis for such repeatable experiences.
— Hartmut Rosa (tr. James C. Wagner), Resonance: A Sociology of Our Relationship to the World, 39
Eighteenth-century man was the man who could no longer remain ignorant of the significance of the fact that Copernicus and Galileo were right, that this vast and rich earth of his, the theatre of his deeds was not the centre of the universe, but a grain of dust amid countless others in this universe, and who clearly saw the consequences of all this. What did this really apocalyptic revolution in his picture of the universe mean for man? An unprecedented and boundless humiliation of man? No, said the man of the eighteenth century, who was not the first to gain this knowledge, but certainly the first to realize it fully and completely; no, man is all the greater for this, man is in the centre of all things, in a quite different sense, too, for he was able to discover this revolutionary truth by his own resources and to think it abstractly, again to consider and penetrate a world which had expanded overnight into infinity—and without anything else having changed, without his having to pay for it in any [24] way: clearly now the world was even more and properly so his world! It is paradoxical and yet it is a fact that the answer to his humiliation was those philosophical systems of rationalism, empiricism and scepticism which made men even more self-confident. The geocentric picture of the universe was replaced as a matter of course by the anthropocentric.
— Karl Barth, Protestant Theology in the Nineteenth Century: Its Background and History, 23–24
The dispute arises in part because there are really two types of continents: Those recognized by cultures around the world, and those recognized by geologists. Cultures can define a continent any way they want, while geologists have to use a definition. And geological research in recent years has made defining continental boundaries less simple than it might have once seemed as researchers find evidence of unexpected continental material.
[While] the Enlightenment — heterogeneous, contradictory, and complex as its ideas may have been — did gradually come to establish the concept of the self-determined way of life as an effective cultural benchmark in the realms of politics and pedagogy, religion and aesthetics, the economy and everyday practice, it generally tended to supplement this concept with the idea that reason, nature, and the (political) common good would come to provide a “natural” limit to the spaces opened up by the ideal of self-determination and thus a more or less generalizable, socially acceptable way of life and concept of happiness. Over the course of the nineteenth and especially the twentieth century, the demand for self-determination expanded into ever more spheres of life, while the idea that this demand could be substantially or essentially limited by reason, nature, and community became increasingly less plausible and lost much of its binding force. At the same time, social institutions were gradually reshaped to become dependent on anonymous actors. From education to the professions [19], from the supermarket to party democracy, from the religious constitution to the art market to the use of media, subjects capable of acting and making decisions in accordance with individual preferences have become a functional requirement of modern institutions.
— Hartmut Rosa (tr. James C. Wagner), Resonance: A Sociology of Our Relationship to the World (London: Polity, 2019), 18–19
Relativ-ism, plural-ism, modern-ism, secular-ism — these are agendas, characteristic to the ethos of the present age, which we must resist accommodating or tacitly embracing in our thoughts, plans, and decisions; such resistance depends on carefully cultivating our core commitments and pruning the habits that express those commitments.
Relativ-ity, plural-ity, modern-ity, secular-ity — these are brute facts, descriptive of the reality of the present age, which will provide friction and resistance to our thoughts, plans, and decisions unless we deliberately shape our plans and decisions to them; such shaping depends on carefully observing how these facts work in and on us and our neighbors.
Polanyi has the rationalists’ number, a decade before Foucault et al.:
I do not suggest, of course, that those who advocate philosophic doubt as a general solvent of error and a cure for all fanaticism would desire to bring up children without any rational guidance or contemplate any other scheme of universal hebetation. I am only saying that this would be what their principles demand. What they actually want is not expressed but concealed by their declared principles. They want their own beliefs to be taught to children and accepted by everybody, for they are convinced that this would save the world from error and strife. In his Conway Lecture of 1922, republished in 1941, Bertrand Russell revealed this in a single sentence. After condemning both Bolshevism and clericalism as two opposite dogmatic teachings, which should both be combated by philosophic doubt, he sums up by saying: ‘Thus rational doubt alone, if it could be generated, would suffice to introduce the Millennium.’ The author’s intention is clear: he intends to spread certain doubts which he believes to be justified. He does not want us to believe the doctrines of the Catholic Church, which he denies and dislikes, and he also wants us to resist Lenin’s teaching of unbridled revolutionary violence. These disbeliefs are recommended as ‘rational doubts’. Philosophic doubt is thus kept on the leash and prevented from calling in question anything that the [sceptic] believes in, or from approving of any doubt that he does not share. … Since the sceptic does not consider it rational to doubt what he himself believes, the advocacy of ‘rational doubt’ is merely the sceptic’s way of advocating his own beliefs.
— Michael Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy, 297
I’m continuing to be amazed by the depth and prescience of Polanyi’s Personal Knowledge. Here is a sequence of quotations from (really, the bulk of) chapter 8, “The Logic of Affirmation,” section 12, “The Fiduciary Programme”:
The critical movement, which seems to be nearing the end of its course today, was perhaps the most fruitful effort ever sustained by the human mind. The past four or five centuries, which have gradually destroyed or overshadowed the whole medieval cosmos, have enriched us mentally and morally [TC: morally? somewhat dubious] to an extent unrivalled by any period of similar duration. But its incandescence had fed on the combustion of the Christian heritage in the oxygen of Greek rationalism, and when this fuel was exhausted the critical framework itself burnt away. [pp. 265–66]
We must now recognize belief once more as the source of all knowledge. Tacit assent and intellectual passions, the sharing of the idiom and of a cultural heritage, affiliation to a like-minded community: such are the impulses which shape our vision of the nature of things on which we rely for our mastery of things. No intelligence, however critical or original, can operate outside such a fiduciary framework. [p. 266]
Our mind lives in action, and any attempt to specify its presuppositions produces a set of axioms which cannot tell us why we should accept them. Science exists only to the extent to which there lives a passion for its beauty, a beauty believed to be universal and eternal. Yet we know also that our own sense of this beauty is uncertain, its full appreciation being limited to a handful of adepts, and its transmission to posterity insecure. Beliefs held by so few and so precariously are not indubitable in any empirical sense. Our basic beliefs are indubitable only in the sense that we believe them to be so. Otherwise they are not even beliefs, but merely somebody’s states of mind. [p. 267]
[We] can voice our ultimate convictions only from within our convictions—from within the whole system of acceptances that are logically prior to any particular assertion of our own, prior to the holding of any piece of knowledge. If an ultimate logical level is to be attained and made explicit, this must be a declaration of my personal beliefs… An example of a logically consistent exposition of fundamental beliefs is St. Augustine’s Confessions. Its first ten books contain an account of the period before his conversion and of his struggle for the faith he was yet lacking. Yet the whole of this process is interpreted by him from the point of view which he reached after his conversion. He seems to acknowledge that you cannot expose an error by interpreting it from the premisses which lead to it, but only from premisses which are believed to be true. His maxim nisi credideritis non intelligitis [“unless ye believe, ye shall not understand”] expresses this logical requirement. It says, as I understand it, that the process of examining any topic is both an exploration of the topic, and an exegesis of our fundamental beliefs in the light of which we approach it; a dialectical combination of exploration and exegesis. Our fundamental beliefs are continuously reconsidered in the course of such a process, but only within the scope of their own basic premises. [p. 267]
[The] greatly increased critical powers of man… have endowed our mind with a capacity for self-transcendence of which we can never again divest ourselves. We have plucked from the Tree a second apple which has for ever imperilled [sic] our knowledge of Good and Evil, and we must learn to know these qualities henceforth in the blinding light of our new analytical powers. Humanity has been deprived a second time of its innocence, and driven out of another garden which was, at any rate, a Fool’s Paradise. Innocently, we had trusted that we could be relieved of all personal responsibility for our beliefs by objective criteria of validity—and our own critical powers have shattered this hope. Struck by our sudden nakedness, we may try to brazen it out by flaunting it in a profession of nihilism. But modern man’s immorality is unstable. Presently his moral passions reassert themselves in objectivist disguise and the scientistic Minotaur is born. The alternative to this, which I am seeking to establish here, is to restore to us once more the power for the deliberate holding of unproven beliefs. We should be able to profess now knowingly and openly those beliefs which could be tacitly taken for granted in the days before modern philosophic criticism reached its present incisiveness. Such powers may appear dangerous. But a dogmatic orthodoxy can be kept in check both internally and externally, while a creed inverted into a science is both blind and deceptive. [p. 268]
Recall: this work was written at the same time as Gadamer’s Truth and Method and Thomas Kuhn’s The Structure of Scientific Revolutions, and predates by over two decades MacIntyre’s trilogy that runs from After Virtue through Whose Justice? Which Rationality? to Three Rival Versions of Moral Enquiry. Those works, of course, have immense value in their own right (though I am beginning to suspect that Kuhn ought to be read as basically a special case of Polanyi). Yet it is remarkable how many of their core insights are anticipated here and elsewhere. MacIntyre’s “there is no rationality that is not of some tradition”? Here it is. Gadamer’s “the Enlightenment instilled an unjustified prejudice against prejudices”? Bingo. Kuhn’s recognition of regnant scientific “paradigms” that depend largely on the standards of scientific satisfaction in a given period, rather than the totality of available evidence? Ding, ding, ding. It’s all in here, folks, at least in highly compressed form.
Of course, Polanyi’s prose — which I am finding far slower even than MacIntyre’s — is a real hindrance to his reception. (The above quotations, some of which are remarkably snappy, are not exactly representative!) But in my own fields of theology and biblical studies, I cannot help thinking that discussions of method and comparison of different works which do not attend to these core insights amount only to so many exercises in wheel-spinning. Such exercises, at best, may result in a good workout — but at the end you are still sitting in the same place you were, with a great deal of sweat and exhaustion but no forward progress to show for it. Is forward progress then possible? That is the burden of the final chapters of Personal Knowledge.
A great analysis by Polanyi of the cultural structure that maintains (or at least maintained) Marxism:
This is the characteristic structure of what I shall call a dynamo-objective coupling. Alleged scientific assertions, which are accepted as such because they satisfy moral passions, will excite these passions further, and thus lend increased convincing power to the scientific affirmations in question—and so on, indefinitely. Moreover, such a dynamo-objective coupling is also potent in its own defence. Any criticism of its scientific part is rebutted by the moral passions behind it, while any moral objections to it are coldly brushed aside by invoking the inexorable verdict of its scientific findings. Each of the two components, the dynamic and the objective, takes it in turn to draw attention away from the other when it is under attack.
We can see that this structure underlies also a logical fallacy exposed by the academic critics of Marxism, and explains why the fallacy survives its exposure. The critics say that no political programme can be derived from the Marxian prediction of the inevitable destruction of Capitalism at the hands of the proletariat. For it is senseless to enlist fighters for a battle which is said to be already decided; while if the battle is not yet decided, you cannot predict its issue. But within a dynamo-objective coupling, the logical objection against using a historical prediction as an appeal to fight for the certain outcome of history no longer arises. For the prediction is accepted only because we believe that the Socialist cause is just; and this implies that Socialist action is right. The prediction implies therefore a call to action.
— Michael Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy, 230–31
Three incredibly important paragraphs from Polanyi:
Every acceptance of authority is qualified by some measure of reaction to it or even against it. Submission to a consensus is always accompanied to some extent by the imposition of one’s views on the consensus to which we submit. Every time we use a word in speaking and writing we both comply with usage and at the same time somewhat modify the existing usage; every time I select a programme on the radio I modify a little the balance of current cultural valuations; even when I make my purchase at current prices I slightly modify the whole price system. Indeed, whenever I submit to a current consensus, I inevitably modify its teaching; for I submit to what I myself think it teaches and by joining the consensus on these terms I affect its content. On the other hand, even the sharpest dissent still operates by partial submission to an existing consensus: for the revolutionary must speak in terms that people can understand. Moreover, every dissenter is a teacher. The figures of Antigone and of the Socrates of the Apology are monuments of the dissenter as law-giver. So are also the prophets of the Old Testament—and so is a Luther, or a Calvin. All modern revolutionaries since the Jacobins demonstrate likewise that dissent does not seek to abolish public authority, but to claim it for itself.
Admittedly, submission to authority is in general less deliberately assertive than is an act of dissent. But not always. St. Augustine’s struggle for belief in revelation was much more dynamic and original than is the rejection of revelation by a religiously brought up young man today. In any case, at every step of the process by which we are brought up and continue to participate in an established consensus, we exercise some measure of choice between different degrees of conformity and dissent, and either of these choices may mean a more passive or a more assertive reaction.
We should realize at the same time how inevitable, and how unceasing and comprehensive are such accreditive decisions. I cannot speak of a scientific fact, of a word, of a poem or a boxing champion; of last week’s murder or the Queen of England; of money or music or the fashion in hats, of what is just or unjust, trivial, amusing, boring or scandalous, without implying a reference to a consensus by which these matters are acknowledged—or denied to be—what I declare them to be. I must continually endorse the existing consensus or dissent from it to some degree, and in either case I express what I believe the consensus ought to be in respect to whatever I speak of. The present text, in which I have described in my own way the interaction of every utterance with the public consensus, is no exception to what I have said in the text about utterances of this kind. Throughout this book I am affirming my own beliefs, and more particularly so when I insist, as I do here, that such personal affirmations and choices are inescapable, and, when I argue, as I shall do, that this is all that can be required of me.
— Michael Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy, 208–09
We should also remember that the rules of induction have lent their support throughout the ages to beliefs that are contrary to those of science. Astrology has been sustained for 3000 years by empirical evidence confirming the predictions of horoscopes. This represents the longest chain of historically known empirical generalizations. For many prehistoric centuries the theories embodied in magic and witchcraft appeared to be strikingly confirmed by events in the eyes of those who believed in magic and witchcraft. Lecky rightly points out that the destruction of belief in witchcraft during the sixteenth and seventeenth centuries was achieved in the face of an overwhelming, and still rapidly growing, body of evidence for its reality. Those who denied that witches existed did not attempt to explain this evidence at all, but successfully urged that it be disregarded. Glanvill, who was one of the founders of the Royal Society, not unreasonably denounced this method as unscientific, on the ground of the professed empiricism of contemporary science. Some of the unexplained evidence for witchcraft was indeed buried for good, and only struggled painfully to light two centuries later when it was eventually recognized as the manifestation of hypnotic powers.
— Michael Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy, 168
“[Dynamic] relationships are not only more important than the entities related, but… ontologically prior to them — so that what we call ‘things’ arise out of the web of interconnectedness, not the web out of the things.”
— Iain McGilchrist, The Matter with Things: Our Brains, Our Delusions, and the Unmaking of the World, 1224
It is the normal practice of scientists to ignore evidence which appears incompatible with the accepted system of scientific knowledge, in the hope that it will eventually prove false or irrelevant. The wise neglect of such evidence prevents scientific laboratories from being plunged forever into a turmoil of incoherent and futile efforts to verify false allegations. But there is, unfortunately, no rule by which to avoid the risk of occasionally disregarding thereby true evidence which conflicts (or seems to conflict) with the current teachings of science. During the eighteenth century the French Academy of Science stubbornly denied the evidence for the fall of meteorites, which seemed massively obvious to everybody else. Their opposition to the superstitious beliefs which popular tradition attached to such heavenly intervention blinded them to the facts in question.
— Michael Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy, 138
This column / book review by N. S. Lyons is worthwhile — as much for its ultimate affirmation that this may be “neither the best nor the worst of times, but simply the time we have been given” as anything else. There is one feature I find odd. Toward the end of the piece, Lyons cites Jordan Peterson’s recent proclamation that we are living on the cusp of (or indeed in the early moments of) the Counter-Enlightenment. He then goes on to cite Oswald Spengler’s suggestion in The Decline of the West that the collapse of the “age of theory” might give way to a “sweeping re-Christianization” (Lyons’s term, not Spengler’s). The effect is to suggest that “the Counter-Enlightenment” and the “sweeping re-Christianization” will be, if not perfectly co-constitutive, at least a 90% overlapping Venn diagram.
But, as Lyons (and Peterson) surely know, there have been many Counter-Enlightenments before, and likely will be again before Enlightened modernity has run its course. Probably a majority of the most celebrated philosophical thinkers active since 1800 have been, in some sense, Counter-Enlightenment figures: Hegel, Schopenhauer, Nietzsche, Husserl, Heidegger, Spengler (!), Scheler, Gadamer, Derrida, Foucault are the first ten names that come to my mind, and obviously there are others — Wittgenstein, anyone? (Crack open the bibliography of Iain McGilchrist’s The Matter with Things for many more!) The interwar German philosophical coterie of which Heidegger was the most prominent figure even seems to have self-consciously identified as a new Counter-Enlightenment school. None of these figures, whatever their individual religious beliefs, can really be said to have contributed to any sort of sweeping re-Christianization, though in my estimation some are more readily appropriated for the tasks of Christian philosophy and theology (Gadamer, Wittgenstein, and — in a roundabout way — Nietzsche) than others (Schopenhauer, Heidegger, Foucault, and probably Derrida too, whatever Jamie Smith says).
And — to turn the screw further — what could be more quintessentially Enlightenment in its underlying attitude than, say, a project to refound all of metaphysics from first principles? Every Counter-Enlightenment inevitably has a great deal of Enlightenment still in it. That is because the Enlightenment is not a philosophical school — Wolffian deductive rationalism, Kantian transcendental idealism, Benthamite utilitarianism, or whatever it is that Steven Pinker and Peter Singer have in common — so much as a set of postures, habits, and — for lack of a better word — vibes. An extremely persistent and evolutionarily successful set of postures, habits, and vibes, no less, which has spent the better part of three hundred years displaying an extraordinary capacity to adapt to and co-opt opposition. The Enlightenment mold, it seems, cannot be shattered from within: now that Kant’s “sapere aude!” has become conventional wisdom, anyone who self-consciously tries to break with it is still, by definition, daring (in some measure) to use their own understanding. Once one has grown up and been educated under the plausibility structures of post-Enlightenment modernity, it is extremely difficult to shake them off and abandon them entirely. (See also: theologically educated Protestants converting to Roman Catholicism.) Neither can the dialectic of Enlightenment simply be ignored; its embodiment in modern technologies and technological society means it is almost no use deciding you are uninterested in the dialectic, since the dialectic remains just as rapaciously interested in you. The rise of a purportedly Counter-Enlightenment movement in Western public life neither guarantees a sweeping re-Christianization of society nor promises a breaking out of the Enlightenment mold.
An oddity of philosophical / theological history: the great minds whom we now remember often developed their ideas in contradistinction not principally to a preceding great mind who founded a school, but to that school’s later and lesser lights, who took their founder’s insight too far — and whom we do not now remember.
[Despite] our always contributing to the reality we experience, there is something apart from ourselves to which we can be true — that reality, in other words, is not purely made up by the brain. There is a relationship there — something to be true to. Assuming there is something there to know implies that some understandings will inevitably be better than others. And since each hemisphere provides a different understanding of it, it is perfectly coherent — and indeed necessary — to ask which is superior. (The validity of the question is not affected by the observation that we can, and may be best to, use both.) If a pilot is flying blind and has two navigation systems to rely on, each of which, though they differ, provides significant information, the criterion for having to prefer one over the other is clear: following which one is less likely to lead to a crash. Or again, as a piece of music cannot be experienced without a player, who inflects what it is that we hear, there is nonetheless such a thing as a better or worse performance, one that is more or less faithful to the potential enshrined in the piece — a potential that is, essentially, the piece of music, and becomes realised in every true performance. The arbiter, then, in either case, is the experience of the whole embodied person as he or she responds to a more, or less, accurate — a richer, or poorer — account of the world.
— Iain McGilchrist, The Matter with Things: Our Brains, Our Delusions, and the Unmaking of the World, 1:379–80