Tired Adults May Learn Language like Children Do

Just stumbled upon this article. It’s rather short, but it addresses what I’ve been thinking lately about conscious vs unconscious language learning.

I’m becoming more and more convinced that sound recognition is key, not word/sentence structure recognition.

I feel like languages are made up of a series of lots of common blocks (that follow regular patterns) of sound. As native speakers, we don’t really hear the words being spoken, or even notice them, but rather the sounds they make in combination. That’s why, as native speakers, we instantly notice when someone says something incorrectly, no matter how subtle. It’s like a pianist striking the wrong key.

During my immersion, I’ve been trying to move away from analysing what I hear (it’s very hard for a lot of us) and instead just hear the sounds and infer meaning.

I catch myself doing this at times, before my adult, analytical brain takes over once more. I’m at the stage where I can hear pieces of my TL and understand them, without really knowing exactly what words were used. My thinking is that if that’s even possible - and it is - then surely it’s a huge clue that sound recognition is the key to acquisition, and any form of analysis is potentially useless (in the big picture).

I’ve also noticed that the faster I read the more overall understanding I get from each sentence. When I read slowly, paying close attention in an attempt to analyse the language, my ability to infer overall meaning is reduced.

The difficult thing is to get your brain to stop immediately switching to ‘analytical mode’ each and every time it detects TL. :grin:

As a sidenote, the more level-appropriate content I consume, the more I’m able to shut off the analytical brain, which would fall in line with what Krashen’s ‘comprehensible input’ theory tells us.

Anyway, I hope I’m making sense. Here’s the article:


Yes, sound is the base, but we have messed up a lot with grammar since the beginning, because we didn’t understand, and still don’t understand, how people really communicate. Otherwise our books and methods would be totally different (imho). And we wouldn’t have thousands of exceptions to rules that don’t make sense and that we keep perpetuating because we just like to do so (ironic!).

But now we have to deal with the structure of the language we have created on top of the sound foundation we had.

In the end, the goal would be to reach that point where you understand that a particular word “doesn’t sound right”, that a sentence “probably is not good”, that the orthography is not correct “because something is wrong”.

You start to have a connection with the language that you can’t explain; you start to “feel” the language. Both in sound and in writing.

So probably now we need to feed a double beast: the connection with the sound and the connection with the structure we have created.

And each time you learn a new language, or you are at that breaking point, all the other languages start to shatter and you lose certainty about many things, until your brain integrates everything and magically creates an entire integration of it all.

EDIT: I’ve just answered because I feel like a Tired Adult! :smiley:


I have the same experience as you, @helion, and I’m certain we’re not unique. I will often prefer listening over reading (sometimes at enhanced speeds) to more easily ingest certain content. And like you, when “in the groove”, I won’t hear individual words but absorb the meaning. Of course, my target language is different from many in that it uses a different alphabet, which also slows me down even though I know it very, very well. I just can’t scan it like I can English or others in the Latin alphabet.

It’s very interesting to see young children acquire language. My oldest started speaking with grammatically correct full sentences almost from the beginning. My wife was concerned when his younger sister started speaking very poorly, but was reassured that she was the “normal” one. Her pronunciation was also very poor, but her brother could “interpret” for us when we couldn’t understand her. :slight_smile:

Now I have two young granddaughters. Their progression is very interesting, and since I don’t see them every single day their advances are perhaps more noticeable. The younger of the two is the more verbal. She started babbling to herself in very conversational tones, imitating full sentences, well before she was 2. I find that very interesting, and it is perhaps germane to your ideas. But I’m no developmental psychologist.


So interesting that you wrote about that. I was just wondering about the different stages of development in a child’s output. The process is fascinating.

I’d really like to understand more about that process, and how it is that they seemingly go from connected words to fully formed (and often quite grammatically correct) sentences so quickly. It has to come from a previously stored up reservoir of subconscious input, ready to burst its banks. It’s like when Steve talks about building up your potential in a language through input. Get enough input and the output will come very quickly (when we’re ready). IMO, it gives credence to a ‘silent period.’

The question then becomes how much is ‘enough’? My feeling is that it’s way more than most of us allow for, concerning our output expectations.


But is sound itself the structure? Are the “rules” just a hopeful, but ultimately doomed attempt to explain the unexplainable to our analytical brains? Which, ironically, are the very mechanisms which block acquisition.

By learning the “rules” are we just setting into motion this pathological need we have as adults to give logic to something that has little to no logic? And does this hinder, in no small way, our ability to acquire naturally?

I might be starting to sound like I’m on weed now, which may not be such a disaster when it comes to language acquisition, haha. Just kiddin’. Don’t do drugs, kids. :grin:


haha, I used to have those trips too, but now I’ve become more pragmatic, or I simply let go of what I don’t grasp yet.

I believe there is a structure in language communication, learning and acquiring, and the foundation is the sound. But we just don’t understand it yet.
Whether the sound is the structure itself I cannot know, but I suppose it is more a frequency, or range of frequencies, of the “big sound”. And every living form has its own frequency.

We can hear some of the frequencies, we can’t hear others but communication exists anyway. We are just far from knowing anything. We can just perceive that we know nothing.

Before there were any stars or galaxies, 13.8 billion years ago, our universe was just a ball of hot plasma – a mixture of electrons, protons, and light. Sound waves shook this infant universe, triggered by minute, or “quantum,” fluctuations happening just moments after the big bang that created our universe.

“The difficult thing is to get your brain to stop immediately switching to ‘analytical mode’”
No, there’s nothing “difficult” about it.
Just use an ultrareading-while-listening approach with an elevated audio speed (usually 1.25 - 1.7x) as a pacemaker. Using this approach, you don’t have time for long-winded analysis processes or attempts to translate into your L1.

The more often you do that, the easier (i.e., more automatic) it becomes…


“As native speakers, we don’t really hear the words being spoken, or even notice them, but rather the sounds they make in combination.”
Doesn’t make sense either: as native speakers or advanced L2 speakers, we normally focus on meaning processing, i.e., the content.

Of course, you need sound in oral communication, but language consists of “form-meaning (!) pairs” (in modern linguistics called signs, as “signifiant / signifié”, etc.). In other words, sound without the corresponding meaning is not a language; it’s just music or noise.

Apart from that, identifying sound “patterns” is an extremely complicated process probably based on non-essentialist categorizations (see, for example: Wittgenstein’s family resemblance, Prototype theory - Wikipedia, Derrida’s itérabilité, etc.) that transcends traditional type (“Ur-form / -schema”, etc.) - token relationships where the type is seen as an “essence” (this type of thinking has been completely “destroyed” by poststructuralist and similar authors in the last 50-60 years).

If you’re “really” interested in this stuff, you should read literature about cognitive science.
Just trying to use common sense will lead you, well: nowhere :slight_smile:


You do if you can’t stop pausing it. :joy:


Analysis requires time and mental energy… you don’t have that luxury with ultra-reading while listening, because it’s way too fast for that (having tested it with several Germanic and Romance languages for more than 1000 hours, I’m pretty sure about that).

For someone as advanced in Spanish as you are (with more than 4000 hours of listening under your belt), it should be a “walk in the park”…


Again, if I miss something, at any speed, my analytical brain will kick in and I’ll be compelled to pause it, no matter how much I know I shouldn’t. If I don’t do that, I tend to just stop listening altogether. It’s something I’m always fighting against, and have yet to solve.

I’m not sure where you got ‘4k hours of listening’ from? I honestly can’t accurately tell you how many hours of listening I have, but it’s probably more than 1k and less than 2k. As it goes, my overall ‘contact’ with the language probably equates to around 3-4k hours, but I didn’t track that, it’s only an estimate.


K so I’m just gonna shoot some armchair philosophy from the hip here…

TLDR: If you want to avoid the bar-room spiel below, I agree with you: sound is the thing for building up a subconscious, language-specific module in your brain.

My personal theory is that humans are optimized for learning languages by listening/speaking and that since reading/writing is less than 4,000 years old, those of us who are good at learning from reading/writing are in fact co-opting an entirely different system to be used in the service of language learning. One that not everyone has.

In addition to that, I think we develop separate language centers when we cross the threshold of fluency: a language center is a subconscious module, if you will, that doesn’t include translation: the sounds of the words just hit the language center and you know what the meaning is without having to relate it back to your native language (in this case English).

I’m already there with Spanish. There are words in my language center that I have learned within Spanish that I didn’t learn by means of English. I have to think about it to translate to English but I just know what it means in Spanish. For example “me caes gordo” translates as “to me you fall fat”. Doesn’t make much sense in English but in Spanish the correct translation is “you annoy me”. There are also words in Spanish that I have connected somehow right back to my internal representation of meaning which I got from my initial/original exposure to the concept when learning English. For example: for me a “pot” is a black pot with a black handle, mid-size and a “dog” is a medium-large size dog: those are my personal concepts of those two things.

I think as you say, it’s repeated exposure to the sounds of the language over time. I think that is what builds up the subconscious language center.

The crux is, however, it has to build up gradually from smaller chunks.
That’s why TPRS works.

I surmise therefore that if there was a way to do TPRS all the way up with every single word/phrase explained in the language then that would be the natural way to go.

Anyhow, that was a ramble, TLDR I agree with you.


TLDR: Riffing on what you’re saying.

The Wittgenstein-type thing (remembering some drunken university conversations with drunk philosophy students) is about the essence of the meaning. I think that’s right, just that the essence of the meaning is your personal essence of it. What we’re doing in our brain, IMO, when we’re learning a language is trying to link that essence of the meaning (concept) to some external representation (whether it’s a sound or a symbol). We can either do that de novo in the new language, with an entirely new concept which has to be learned and linked to the new language’s sound/symbol, OR we can attempt to link a pre-existing concept in our brains to the new sounds.

I think that it’s way easier to link cognates because the connected sound/symbol is already linked to the inner concept.

I’m the same. My brain goes “shiny” and halts to examine it. I then lose the rest.

I wonder about babbling. Children do it and adults don’t. Is there something about babbling we are missing?

Yeah. “It doesn’t sound right” is the key.
But there is also “it doesn’t read right”.
Having said that, I think “it doesn’t sound right” is the primordial one.

I think we get the rules from repeated, over-and-over exposure. If we’re paying attention and are capable of noticing.

I am going to sidestep the issue a little bit here, but if you simply pull up the text and read along with the audiobook at high speed, you literally have no choice but to go forward. If you are really interested in a specific word, LingQ it and make a “better” LingQ after you are done. Do not pause, do not stop; just LingQ the word and read further.

Ultrareading takes away your ability to analyze beyond a few seconds, and takes away any need to subvocalize. You eventually should learn to listen or just read things, without R+L, but the runway before that stops being useful is very long.

I have to do the same thing taking my dingo for a walk. If we jog or walk at a brisk pace, she has no choice but to keep moving. She gets a few seconds to smell things, but beyond that has to move. If I slow down to where she can fully stop, then I need to get her moving again, which slows down the entire process.


The observation that you have to get auditory processing happening in order to develop capacity with a language is valid enough, but I don’t think we should construct an (ideo)logical opposition between auditory and visual modes of representation. Having a mapping between multiple modalities - the auditory and visual modalities - can be helpful in bootstrapping consciousness. That’s why (I think) reading while listening helps. You have an extra modality to use to make meaning out of the auditory stimulus.

While I was typing this out, my four-year-old came up to me with a long string of letters he had just written, spelling an unbroken series of nonsense-words, and asked me to “read” it. We both found it hilarious. What does this say about how children learn language? How quickly in development can “non-ancestral” modes of language acquisition help? Do children start engaging analytical-like modes of information processing before they can reflectively grasp what they are doing, and therefore communicate it to us in adult-friendly ways? I don’t know. Maybe scientists know. Maybe they don’t.

I am a musician, and so an analogy between music theory and grammar (understood as all reflective, non-auditory, non-“ancestral” processing of linguistic information) comes quickly to mind.

A musician who “knows” theory operates in a different way than a musician who does not, even though the theory is not being processed in music-making through the same conscious system that is used to analyze a score. Really successful musicians tend to enjoy that reflective, higher order engagement with the music, and can easily “re-virtualize” it when they play and listen. They know how to “let it be there” without it distracting them from “immersion” in the act of music making. They also respond well to intelligent preparation of these processes by an experienced musician.

This is how I feel about grammar. I also found school-like textbook grammar exercises boring - mostly because they were busy work designed to trick me into memorizing something that it is useful to memorize, but in a stupefying “spoonful of sugar to make the medicine go down” way - but I don’t have a hostile relationship with grammar as such. I use grammar in teaching my kids foreign languages, but I try to use it in an artful way, prompting reflective awareness when it seems necessary and interesting and helpful, but also doing a ton of immersion (they are listening to Italian TV right now while they play together). If they can handle memorizing paradigms, and they do it effortlessly and quickly, I encourage them to do so, because I think it primes the pump of their listening-module. If they don’t do it effortlessly and quickly, I back off a little, but don’t throw it out entirely. There is a fine line between busy work and flirting with the mind to help it open up. The art of deliberate teaching is mostly about identifying that line and staying on the right side of it.

BTW, and to reinforce your point about the importance of auditory processing (now that I’ve qualified it a bit) I think the idea of surfing the meaning in sound that I am so used to doing as a musician translates to language acquisition. I can tell you that having personal experience with the process of “filling my ears with consciousness,” as well as shutting off unhelpful conscious cognition in performance, has taught me how to listen in a way that I deliberately use when I immerse in Italian. Passing information from analytic to non-analytic subsystems of the mind is a very useful skill that can be encouraged and somehow consciously guided.


@noxialisrex I’m really intrigued by your approach to R+L. Please could you say a bit more about how you work with Lingq’s system of unknown/lingqs/known words with this approach?

For instance, if you read through a lesson extensively with the sped up audio as a pacemaker, do you then return to the same text and read through ‘intensively’ marking words as lingqs/known? And do you read through the same texts multiple times extensively?

I imagine this has changed quite a bit as you’ve progressed in your knowledge of the languages you’re studying, but it’d be really useful to get a sense of how you manage vocabulary with this approach.