Just stumbled upon this article. It’s rather short, but it addresses what I’ve been thinking about lately: conscious vs. unconscious language learning.
I’m becoming more and more convinced that sound recognition is key, not word/sentence structure recognition.
I feel like languages are made up of lots of common blocks of sound that follow regular patterns. As native speakers, we don’t really hear the individual words being spoken, or even notice them, but rather the sounds they make in combination. That’s why we instantly notice when someone says something incorrectly, no matter how subtle the error. It’s like a pianist striking the wrong key.
During my immersion, I’ve been trying to move away from analysing what I hear (it’s very hard for a lot of us) and instead just hear the sounds and infer meaning.
I catch myself doing this at times, before my adult, analytical brain takes over once more. I’m at the stage where I can hear pieces of my TL and understand them without really knowing exactly which words were used. My thinking is that if that’s even possible (and it is), then surely it’s a huge clue that sound recognition is the key to acquisition, and any form of analysis is potentially useless (in the big picture).
I’ve also noticed that the faster I read, the more overall understanding I get from each sentence. When I read slowly, paying close attention in an attempt to analyse the language, my ability to infer overall meaning is reduced.
The difficult thing is to get your brain to stop immediately switching to ‘analytical mode’ every time it detects the TL.
As a side note, the more level-appropriate content I consume, the more I’m able to shut off the analytical brain, which falls in line with Krashen’s ‘comprehensible input’ theory.
Anyway, I hope I’m making sense. Here’s the article: