“that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought.”
No, probably not.
ChatGPT does not really process “language,” because without the “meaning” dimension there is simply “no” human language.
And, by the way, modern science in the 20th century has completely failed (it doesn’t matter which scientific discipline we’re talking about) in “determining the meaning” of words / sentences once and for all. That is:
1) The meaning of words / sentences is completely dependent on contexts.
2) Contexts can never be closed, i.e., there are neither absolute nor final contexts.
If we short-circuit 1) and 2) with each other, the consequence is: contexts are always open, ergo the meaning of words / sentences is always open, in the sense of: constantly shifting as well.
However, there are also some semantic aspects (“semes”) that are “more stable” when switching contexts. So, we have to consider two aspects at the same time:
- a radical openness of contexts and, therefore, meanings of words / sentences used in communication processes
- a kind of relative semantic stability across various contexts.
That’s basically the main idea of Derrida’s (non-)concept of “itérabilité” = the non-identical reproduction of words / sentences in always changing contexts.
This means that the idea of an “absolute” (true, valid, etc.) interpretation of a religious or any other text is completely absurd. If there were such a thing, language would implode in an instant → no context, no language, no consciousness, no human communication for coordinating behavior: just a black hole of media nothingness.
All kinds of scientific disciplines (without exception) had to learn this lesson in the 20th century - esp. after the collapse of (linguistic) structuralism in the late 1960s with the rise of “post-structuralist” and “difference-based” approaches (Jacques Derrida, Michel Foucault, Gilles Deleuze, Niklas Luhmann, George Spencer-Brown / Dirk Baecker, etc.).
In short, the “meaning” dimension of human language processing is an extremely “slippery beast”, esp. from a scientific point of view.
However, here “machine learning” comes into play: This branch of AI tends to circumvent the “slippery meaning beast” by just focusing on the mathematical and statistical processing of patterns in big data. And the really astonishing fact is that this simulation of language processing is sometimes so good that we humans think it’s like the real deal, i.e., the human processing of language.
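To make the contrast concrete, here is a deliberately crude sketch (my own toy example, not how ChatGPT actually works) of purely statistical “language processing”: a bigram counter that predicts the next word from raw co-occurrence frequencies, with no representation of meaning anywhere in the program.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees strings, never "meanings".
corpus = (
    "earth is a planet . earth is the third planet from the sun . "
    "an orange is a fruit . the sun is a star ."
).split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("earth"))  # prints "is" - pure frequency, no semantics
```

Real large language models replace the frequency table with billions of learned parameters, but the basic move is the same: patterns in data stand in for the “slippery meaning beast” itself.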
Of course, that’s not completely the case, because human minds can both “surf” on the associative waves of all kinds of sensory, linguistic, and non-linguistic media forms and “switch” between literal and figurative interpretations in the blink of an eye:
no animal and no AI is - at least at the moment - able to match that.
And therefore my favorite AI “torture sentence” is: “Earth is a blue orange. Why is that?”
ChatGPT: “No, that’s not the case. Earth is the third planet seen from the sun (Wikipedia bla bla bla). It’s not an orange, it’s a planet, a planet, a planet, etc.”
Here the real chat with the AI ends and the fictional part begins:
Peter: “Yes, it can be seen as an orange once you switch to non-literal interpretations. And that’s how humans can process “any” media form. Therefore, ChatGPT, you’re still nothing but a text generator with formulaic responses…”
ChatGPT: “Let’s talk again when I’m connected to human brains.”
Peter, silent and thinking: “Yes, that might be the end of the Anthropocene age as we know it…” (see Harari’s “Homo Deus”, for ex.).