impersonal knowledge
# Why you should not use ChatGPT, large language models, or other “artificial intelligence” (falsely so-called) tools in your research or work, for any of the synthetic tasks (summaries of data or information, etc.) for which they are proposed as helpful time-savers:
- The process of pattern recognition and synthetic integration is the basis of how human beings come to know and understand the world.
- In humans, this process is inextricably bodily. (This is true of human cognition in general: the whole of the human body, not just the brain, is involved in every act of thought; in fact, other bodies are involved too, because thought is an intersubjective process. But I digress.)
- ChatGPT and similar tools are, however, definitionally disembodied. Even if they really are “just pattern-recognition machines” (dubious), their disembodied pattern “recognition” is not the same as the real thing in humans.
- In fact, insofar as ChatGPT exists in the physical world at all, it does so under a very different, non-organic sort of embodiment, one antithetical to the human sort.
- Therefore, ChatGPT and the like cannot be trusted to faithfully simulate human knowing; and if the mechanism cannot be trusted, neither can the results.
- Additionally, by using such a tool, a human being forgoes the opportunity to practice and experience such knowing, kneecapping his or her capacity to learn from the exercise.
In a nutshell: the promise of LLMs is “impersonal knowledge,” but no such thing exists. Relying on it is thus, in a meaningful sense, worse than nothing.