Thesis: The appearance of effortless inhumanity is practically always dependent on the sacrifice or exploitation of hidden persons.
practical knowledge and “scientific” ignorance
Why, then, the unscientific scorn for practical knowledge? There are at least three reasons for it, as far as I can tell. The first is the “professional” reason mentioned earlier: the more the cultivator knows, the less the importance of the specialist and his institutions. The second is the simple reflex of high modernism: namely, a contempt for history and past knowledge. As the scientist is always associated with the modern and the indigenous cultivator with the past that modernism will banish, the scientist feels that he or she has little to learn from that quarter. The third reason is that practical knowledge is represented and codified in a form uncongenial to scientific agriculture. From a narrow scientific view, nothing is known until and unless it is proven in a tightly controlled experiment. Knowledge that arrives in any form other than through the techniques and instruments of formal scientific procedure does not deserve to be taken seriously. The imperial pretense of scientific modernism admits knowledge only if it arrives through the aperture that the experimental method has constructed for its admission. Traditional practices, codified as they are in practice and in folk sayings, are seen presumptively as not meriting attention, let alone verification. And yet, as we have seen, cultivators have devised and perfected a host of techniques that do work, producing desirable results in crop production, pest control, soil preservation, and so forth. By constantly observing the results of their field experiments and retaining those methods that succeed, the farmers have discovered and refined practices that work, without knowing the precise chemical or physical reasons why they work. In agriculture, as in many other fields, “practice has long preceded theory.” And indeed some of these practically successful techniques, which involve a large number of simultaneously interacting variables, may never be fully understood by the techniques of science.
— James C. Scott, Seeing Like a State: Why Certain Schemes to Improve the Human Condition Have Failed, 305–06
impersonal knowledge
Why you should not use ChatGPT, large language models, or other “artificial intelligence” (falsely so-called) tools in your research or work, for any of the synthetic tasks (summaries of data or information, etc.) for which they are proposed as helpful time-savers:
- The process of pattern recognition and synthetic integration is the basis of how human beings come to know and understand the world.
- This process is an inextricably bodily process in humans. (This is true of human cognition in general: the whole of the human body, not just the brain, is involved in every act of thought — and in fact other bodies are involved, too, because thought is an intersubjective process. But I digress.)
- ChatGPT and similar tools are, however, definitionally disembodied. Even if they are in fact “just pattern-recognition machines” (dubious), by virtue of being disembodied their pattern “recognition” is not the same as the real thing in humans.
- In fact, insofar as ChatGPT exists in the physical world at all, it is embodied in a very different, non-organic way, one antithetical to the human sort.
- Therefore, ChatGPT and the like cannot be trusted to faithfully simulate human knowing; and if the mechanism cannot be trusted, neither can its results.
- Additionally, by using such a tool, a human being forgoes the opportunity to practice and experience such knowing, kneecapping his or her capacity to learn from the experience.
In a nutshell: the promise of LLMs is “impersonal knowledge” — but no such thing exists. Relying on it is thus, in a meaningful sense, worse than nothing.