@jeff_lindqvist: That’s right, the content has to be synchronized with the audio in advance. It’s a manual process, but for English content, I did find some open source speech processing tools to help me generate word-level synchronizations… it was pretty darn complicated, though.
As far as altering the speed of the audio goes, the main problem is that, being a web app, the Reader is limited to what the browser can do. To get around that, I experimented with pre-generating some content at a slower speed. The first chapter is at 75% speed, the next is at 50% speed: http://library.dinglabs.com/books/10
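If anyone wants to try this themselves, here is a rough sketch of how the slower copies could be pre-generated offline with ffmpeg’s atempo filter, which changes the tempo without changing the pitch. This is not the actual DingLabs pipeline, and the file names are just placeholders:

```python
# Sketch: pre-generate slowed-down copies of a chapter's audio with ffmpeg.
# Assumes ffmpeg is installed; the file names are illustrative only.
import subprocess

def slow_down(src, dst, speed):
    """Write a copy of `src` at `speed` (e.g. 0.75 or 0.5) without changing pitch."""
    # ffmpeg's atempo filter accepts tempo factors between 0.5 and 2.0.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-filter:a", f"atempo={speed}", dst],
        check=True,
    )

slow_down("chapter1.mp3", "chapter1_75pct.mp3", 0.75)
slow_down("chapter2.mp3", "chapter2_50pct.mp3", 0.5)
```

One thing to keep in mind: any existing word timings have to be divided by the same speed factor, or the highlighting drifts out of sync with the slowed audio.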
Jim has the full support of LingQ in the work that he is doing. I invited him to use our original content for this purpose. I think this is a considerable value-added facility for English learners. I am looking at ways that LingQ can help him and help the LingQ community at the same time.
We are happy to cooperate with others who are offering valuable language learning services such as Rhinospike, and others including some of the bloggers and podcasters who provide content to us. This is a great way to support each other and provide better learning opportunities for all.
@commasplice: Thanks, sincerely, for the Coke recommendation.
I’m sorry to report that I haven’t prepared any synchronized German content yet. Steve Kaufmann has granted me permission to prepare LingQ content for playback in the DingLabs Reader. I have some LingQ material there: http://library.dinglabs.com/
Primarily ChineseLingQ, but also some English, Spanish and French content.
If any LingQers would like to try playing with their content in this way, the free tool to use is Transcriber. I have a tutorial posted here: ding labs: How to Prepare Content - Part 1
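For the more technically minded: Transcriber saves its work as .trs files, which are plain XML with <Sync time="..."/> markers inside each <Turn>. Just to illustrate what the tutorial ends up producing (this is a sketch assuming that standard .trs layout, and the file name is made up), the sync points can be read back like this:

```python
# Sketch: extract (time, text) pairs from a Transcriber .trs file.
# Assumes the standard .trs XML layout, where each segment's text
# follows its <Sync time="..."/> marker; the file name is illustrative.
import xml.etree.ElementTree as ET

def read_sync_points(path):
    """Return a list of (start_seconds, text) segments from a .trs file."""
    segments = []
    for turn in ET.parse(path).iter("Turn"):
        for sync in turn.findall("Sync"):
            text = (sync.tail or "").strip()
            if text:
                segments.append((float(sync.get("time")), text))
    return segments

for start, text in read_sync_points("chapter1.trs"):
    print(f"{start:8.2f}  {text}")
```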
I liked very much what you have done! I especially like that users can restart the player from any clicked word!
Some time ago I also experimented with a kind of “karaoke” for reading-while-listening. I think I even found an unusual karaoke style, one that stayed accurate and precise even for very fast speech. The usual word-by-word jumps of the highlight felt jerky to me; my eyes would soon have tired of following natural (i.e. fast and irregular) speech that way. With such speech, developers often end up with a phrase-by-phrase karaoke, as you have done, for example, in your SpanishLingQ examples. I was looking for a way to reconcile the fine synchronization of the word-by-word method with the ease on the eyes of the phrase-by-phrase method.
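To give a concrete idea of what I mean, here is a small illustration with made-up numbers (not my original code): instead of jumping, the edge of the highlight is interpolated inside the current word’s time span, so it glides across the line:

```python
# Sketch of the "smooth karaoke" idea: slide the highlight continuously by
# interpolating within the current word's time span instead of jumping word
# by word. Each entry is (start_sec, end_sec, char_offset, char_length);
# the numbers below are made up for illustration.

def highlight_position(words, t):
    """Return a fractional character position for the highlight edge at time t."""
    for start, end, offset, length in words:
        if t < start:
            return float(offset)                # waiting at the word's left edge
        if start <= t < end:
            frac = (t - start) / (end - start)  # progress through this word
            return offset + frac * length
    last = words[-1]
    return float(last[2] + last[3])             # past the end: highlight everything

words = [(0.0, 0.4, 0, 1), (0.4, 0.9, 2, 4), (0.9, 1.6, 7, 8)]  # "I have uploaded"
print(highlight_position(words, 0.65))  # halfway through "have" -> 4.0
```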
I had started with manual sound-and-text synchronization, and then dabbled in speech recognition. However, the company named Automaticsync Technology was much better at that, and I ended up relying on their service. Then I happened to quit my karaoke experiments, and later switched to video.
Still, I am interested in all this. Are you familiar with what Google has offered, or is going to offer, to help us sync speech with transcripts? What are those speech processing tools that you mentioned; have they helped you?
It would be great if we could talk using, say, Skype. What do you think about it? Do you learn a language on LingQ? I am going to add you to my LingQ friends; you should then see my Skype name.
Hi Ilya, I do all of my text-audio alignment manually, using the open source tool Transcriber. The only exception is English content, for which I have set up a workflow to generate word-level synchronizations using P2FA. It still requires manual corrections in Transcriber, though.
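In case it helps anyone, P2FA writes its alignments out as Praat TextGrid files. Here is a rough sketch of pulling the word timings out of one; it assumes the long TextGrid format with an interval tier named "word" and pauses labelled "sp", and the file name is made up, so treat it as an illustration rather than my exact workflow:

```python
# Sketch: pull word-level timings out of a P2FA TextGrid (long text format).
# Assumes an interval tier named "word" and "sp" labels for pauses; the
# file name is illustrative only.
import re

def read_word_tier(path):
    """Return (start, end, word) tuples from the "word" tier of a TextGrid."""
    with open(path, encoding="utf-8") as f:
        grid = f.read()
    # Keep only the part of the file after the "word" tier's name line.
    word_tier = grid.split('name = "word"', 1)[1]
    interval = re.compile(r'xmin = ([\d.]+)\s+xmax = ([\d.]+)\s+text = "([^"]*)"')
    words = []
    for xmin, xmax, label in interval.findall(word_tier):
        if label and label != "sp":          # skip empty intervals and pauses
            words.append((float(xmin), float(xmax), label))
    return words

for start, end, word in read_word_tier("chapter1.TextGrid"):
    print(f"{start:7.2f} {end:7.2f}  {word}")
```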
I have uploaded videos for “The Linguist” audiobook:
I’m experimenting with a slightly different playback style. It seems easier on the eyes, and mildly hypnotic as you follow along with each word as it is spoken.
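For the curious, the idea is simple: given word timings like the ones P2FA produces, the player just looks up which word the current playback position falls inside and highlights it. A tiny illustration of that lookup (the Reader itself does the equivalent in the browser, so this is not its actual code):

```python
# Sketch: given word timings (start, end, word) sorted by start time, find the
# word being spoken at a playback position. Purely illustrative.
import bisect

def word_at(words, t):
    """Return the index of the word spoken at time t, or None between words."""
    starts = [w[0] for w in words]
    i = bisect.bisect_right(starts, t) - 1
    if i >= 0 and words[i][0] <= t < words[i][1]:
        return i
    return None

words = [(0.0, 0.4, "I"), (0.4, 0.9, "have"), (0.9, 1.6, "uploaded")]
print(word_at(words, 1.0))   # -> 2 ("uploaded")
```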
I’m grateful for any feedback from fellow learners!
Thanks,
Jim