Noam Chomsky: The False Promise of ChatGPT

Testing ideas for ChatGPT and suggestions for improvement

I am more interested in other potential applications of ChatGPT than in language-learning software.

A few of my more fanciful thoughts, with examples, are listed below.

  1. To replace or complement helpful websites with curated articles and other valuable resources: Wikipedia, vocabulary.com, coupons.com, etc.
  2. To replace language tutors or native speakers by explaining things more clearly and thoroughly. The explanations can sometimes be formulaic and inconclusive, though. The AI could even limit new vocabulary to 5% or 10% based on its interaction with the learner.
  3. To teach the AI a set of linguistic rules, let it build a brand-new language from them, and see how well it conforms to the forms.
  4. To check how well the AI adapts to metaphorical expressions or nuanced sentences. Would the AI perform better, or “learn” to respond better, if it failed to recognize the inquiry in the first place?
    The following is a list of suggestions for improvement, catering to a human end user.
  5. Be less formulaic and avoid long paragraphs excerpted from elsewhere.
  6. Mimic human behavior by adopting a personality, forgetting things, showing hesitation by using backspace while typing, typing at the speed of an average person, setting an away status to ignore someone, etc.
  7. The results should be cross-checked against other resources, or a fallacy check should be built into the design.
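The vocabulary cap in idea 2 could be sketched as a simple post-check on the tutor's reply. Everything here is a hypothetical illustration (the helper name, the toy word list); a real system would use lemmatization and the learner's actual known-word data rather than exact string matching.

```python
# Sketch: estimate the share of "new" vocabulary in a tutor's reply, so a
# tutoring loop could regenerate the reply when it exceeds, say, 10%.
# new_vocab_ratio and the toy word list below are illustrative assumptions.

def new_vocab_ratio(text: str, known_words: set[str]) -> float:
    """Fraction of words in `text` not in the learner's known vocabulary."""
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    unknown = [w for w in words if w not in known_words]
    return len(unknown) / len(words)

known = {"the", "cat", "sat", "on", "mat", "a"}
reply = "The cat sat on a velvet mat."
ratio = new_vocab_ratio(reply, known)  # only "velvet" is new: 1 of 7 words
```

A tutoring loop would compare `ratio` against the learner's chosen threshold (0.05 or 0.10) and ask the model to rephrase when the reply is too hard.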

Limitations of AI
http://norvig.com/chomsky.html
What did Chomsky mean, and is he right?

I take Chomsky’s points to be the following:
B. Accurately modeling linguistic facts is just butterfly collecting; what matters in science (and specifically linguistics) is the underlying principles.
Is he right? That’s a long-standing debate. These are my answers:

B. Science is a combination of gathering facts and making theories; neither can progress on its own. I think Chomsky is wrong to push the needle so far towards theory over facts; in the history of science, the laborious accumulation of facts is the dominant mode, not a novelty. The science of understanding language is no different than other sciences in this respect.

Perhaps it would only be fair to put our most advanced future AI to the test.
We send a group of AI robots to a planet to build and expand. They have the most sophisticated programming but limited mathematical knowledge, covering only integers. Would they discover the Pythagorean theorem from hands-on experience over time? (Creativity.) Would they adopt a new code of conduct, with or without a basic moral subroutine pre-implemented? (Ethics and morality as a failsafe, a mandatory mechanism for the evolution of a race.) Would any of them look at the moon and wonder whether they might be a brain in a vat? (Sentience and self-consciousness.)

In summary, ChatGPT is, for now, an enhanced version of the Google search engine. The answer is only as good as how we formulate our inquiry and form our judgment, if we remember the saying, “To err is human.”

A simple emotion like excitement might be easier to detect. A mixture of guilt, remorse, anger, and relief is hard to distinguish from person to person, even for an AI.

I was not aware that I was in an argument to validate my points. I was expressing my attitude toward the topic and thought I had made myself clear with the following statement.

Overall, the theory of Universal Grammar (or the loosely connected, omnipresent components of human languages) applies more to the development of languages than to the schematic part of language acquisition, since the governing rules both bind it and are subject to dynamic change at the same time.

No linguistic rule will encompass all known human languages, except for the common phenomenon of unpopular grammatical structures, words, etc., being phased out and replaced by alternatives throughout a language's evolution.

Solved:

Btw, one’s gonna have more problems with people asking them such questions. :slight_smile: That’s why I love robots!

  1. a robot may not injure a human being or, through inaction, allow a human being to come to harm;

*sorry for the errors, looks like I’m getting too lazy to check on myself.

@Llearner

"Overall, the theory of Universal Grammar (or the loosely connected, omnipresent components of human languages) applies more to the development of languages than to the schematic part of language acquisition, since the governing rules both bind it and are subject to dynamic change at the same time.

No linguistic rule will encompass all known human languages, except for the common phenomenon of unpopular grammatical structures, words, etc., being phased out and replaced by alternatives throughout a language's evolution."
Your statement above could also be used for anti-Chomsky / anti-UG positions based on bottom-up linguistic emergence (see some of the comments below), while referring to some “regularities” that can be empirically observed in some (but not all) languages.

And this means there isn’t much left of the positions of “language universals” or “UG”. Basically, there’s probably nothing left of Chomsky :slight_smile:

Awesome, S.I. Really awesome!

It looks like your first paragraph did the trick.
So the metaphorical problem is solved. I hope that’s not only the case because I wrote the answer to the ChatGPT folks :-0

Next time, I'll have to use semantic fragments that are only loosely coupled, courtesy of my favorite poet, Arthur Rimbaud.

Thanks, you may have helped me see my points more clearly. Anyway, it will be better for me to take a broad-minded approach toward linguistics in general, since it is a discipline with imprecise elements.

These are, of course, valid and interesting ideas, but I believe the current technology is not there yet. ChatGPT seems to struggle quite a bit with factual information. In particular, its tendency to hallucinate pseudo-information whenever it is out of its element makes it hard to use for the outlined purposes. Currently, I just don't have much confidence in its answers.

I feel the best current use for language learners is as a casual chat / conversation partner, but exclusively in the target language. Anytime non-target-language instructions were introduced, I felt a drop in quality.

There is, of course, still the issue of overly long, formulaic answers; this can, however, be remedied by modifying the initial prompt. If you first instruct it to give only short, concise answers, it can restrain itself and thus behave more like a human chat partner.
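As a minimal sketch of that prompt change, the instruction can go into the system message of a chat-completion request. The wording, the helper function, and the example conversation below are all illustrative assumptions, not an official recipe; the messages would then be passed to whichever chat API you use.

```python
# Sketch: a system prompt that constrains the model to short, conversational
# replies in the target language. build_chat is a hypothetical helper.

def build_chat(target_language: str, user_message: str) -> list[dict]:
    """Build the message list for a chat-completion request."""
    system_prompt = (
        f"You are a casual conversation partner. Reply only in {target_language}. "
        "Keep every answer to one or two short sentences, like a human chat "
        "partner. Do not lecture or produce long paragraphs."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_chat("German", "Was hast du heute gemacht?")
# These messages would then be sent to a chat-completion endpoint,
# e.g. client.chat.completions.create(model=..., messages=messages).
```

Because the constraint lives in the system message, it applies to every turn of the conversation, so the model does not drift back into long formulaic paragraphs after a few exchanges.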

The search-engine approach is, by the way, currently being tested at Bing, but I don't have access. Generally, I don't think these large language models are suited to containing large amounts of factual information, so hooking one up to a controlled subset of the Internet seems like an excellent idea. Although Bing's chatbot has had its share of issues already: Why a Conversation With Bing's Chatbot Left Me Deeply Unsettled - The New York Times

For language learning, I agree that using it as a chatbot, with good judgment from knowledgeable learners, is the best practice. I like to test out language- or AI-related ideas from time to time. The following two questions return drastically different results per input, independent of the instruction language.

Give me ten features in Chinese with an example unavailable in other languages.
A crash course on Chinese grammar, maybe?

Reversible Chinese Words
It’s an epic failure.

Creativity would be the first step in becoming an intelligent being, and AI must be able to “reason” and generate novel output from existing data. Asking an AI to serve on a jury or cast a vote on controversial social issues is among the most challenging things on earth. It will be a long time before we see anything like Vox 114, the holographic artificial-intelligence librarian from The Time Machine (2002 film).