I’ve used LingQ for a couple of hours today and encountered the following bugs:

  1. The words-read count in lessons is vastly inflated. Every time I completed a lesson, the “read counter” landed between 2.5 and 3.5 instead of the usual 1.1–1.3, despite my spending roughly the same amount of time in each lesson.

  2. Many words weren’t properly highlighted / recognized as known or unknown, i.e. words that should be yellow or blue were white. When that happened, there were often issues with turning the pages.

  3. Directly linked to #2 - the dictionary wasn’t working properly on any of the words that were white but shouldn’t have been.

  4. Issues with loading lessons - when you complete a lesson, sometimes the next one doesn’t load (JSON error), and you run into problems when you go back to the previous lesson, e.g. you can’t turn pages or complete it again.

For God’s sake, stop rolling out new features without testing them first. Whenever you roll out new updates (or really, randomly throughout the week), new bugs appear and things break - usually the very same things as before - making LingQ unusable for a while. This is very amateurish on the developers’ part and tiring for people who paid for access to this app. The live version that you serve your customers is not the testing ground. Thanks.

// edit - this is all on the web (Chrome) version.


I had similar problems today. I use Firefox.


I have the same issue with the word count, also on Chrome. Another problem I have is that the pop-up for LingQs cannot be opened, closed, and then reopened. If I close the pop-up, I have to open one for a different LingQ before re-opening the original one. It’s very tedious.


When navigating to a phrase with the right arrow key in sentence mode, the site becomes unresponsive to keyboard input, and the meaning doesn’t appear either; a reload is required. The MS Edge browser console shows the following error:


   Uncaught TypeError: Cannot read properties of null (reading '1')
at d (main-b4eab4b768c5ceb4e7f4.js:2:4706325)
at e.value (main-b4eab4b768c5ceb4e7f4.js:2:4710843)
at e.value (main-b4eab4b768c5ceb4e7f4.js:2:4692688)
at main-b4eab4b768c5ceb4e7f4.js:2:4450836
at a.selectWordOrPhraseCard (main-b4eab4b768c5ceb4e7f4.js:2:4451054)
at t._onNext (main-b4eab4b768c5ceb4e7f4.js:2:4453467)
at t.next (vendor-9c30832d69b1feda5849.js:2:2875753)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2875259)
at t.<anonymous> (vendor-9c30832d69b1feda5849.js:2:2858084)
at n.next (vendor-9c30832d69b1feda5849.js:2:2932776)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2875259)
at n.next (vendor-9c30832d69b1feda5849.js:2:2914163)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2875259)
at n.<anonymous> (vendor-9c30832d69b1feda5849.js:2:2858084)
at n.next (vendor-9c30832d69b1feda5849.js:2:2932776)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2875259)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2934277)
at t.<anonymous> (vendor-9c30832d69b1feda5849.js:2:2858084)
at n.next (vendor-9c30832d69b1feda5849.js:2:2932776)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2875259)
at t.<anonymous> (vendor-9c30832d69b1feda5849.js:2:2858084)
at n.next (vendor-9c30832d69b1feda5849.js:2:2932776)
at t.onNext (vendor-9c30832d69b1feda5849.js:2:2875259)
at HTMLDocument.<anonymous> (vendor-9c30832d69b1feda5849.js:2:2916294)
at HTMLDocument.d (raven.js:351:29)
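For what it’s worth, that particular message (“Cannot read properties of null (reading '1')”) is the classic symptom of code indexing into a regex match without checking for null. This is purely a hypothetical sketch of the failure mode - `extractWordIndex` and the id format are made up, not LingQ’s actual code:

```javascript
// String.prototype.match returns null when nothing matches,
// so indexing the result without a guard throws the exact
// TypeError seen in the console above.
function extractWordIndex(elementId) {
  const match = elementId.match(/^word-(\d+)$/);
  return match[1]; // throws TypeError if match is null
}

// The fixed version guards against a non-matching input.
function safeExtractWordIndex(elementId) {
  const match = elementId.match(/^word-(\d+)$/);
  return match ? match[1] : null;
}
```

If the phrase selected by the arrow key produces an element id (or text) the regex doesn’t expect, the unguarded version would blow up exactly like this.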




  1. The sidebar behaves in strange ways - the resizing functionality is new, I assume. I couldn’t delete a newly added definition:
    sidebar - YouTube

  2. When trying to highlight a word in sentence mode, the system often just turns the page instead of highlighting a yellow / blue word when the right arrow key is used. It goes like this: sentence 1 visible (no highlight, nothing selected) → press right arrow key → sentence 2 visible (contains a yellow / blue word) → press right arrow key → sentence 3 visible (skipping the word in the previous sentence). Expected behavior: highlight the first yellow / blue word in sentence 2, so that the dictionary definition becomes visible.

  3. Unable to finish a lesson - probably just too impatient:
    finish lesson - YouTube


“isBlue is not defined”


Firefox: The list of New Words in the Vocabulary of a lesson (the one you access from within the lesson) is also different from the one on Android - there are many more words on Android than on Firefox. (Left is Android, right is Firefox.)


Sorry about that, everyone. We will push a fix shortly!


As for the inflated words-read count, I’ve noticed that if I go through a lesson “perfectly”, without ever viewing a page more than once, the times-read counter comes out to exactly 3.0x every time, so some factor is incorrectly multiplying it by a constant amount.


I tend not to go back to previous pages (this time the bugs inspired me to try a few things out), but as a rather slow reader who typically gets ~1.0–1.2x, it checks out that the counter is being multiplied by 3 for me as well.


Well, is there any update on the situation? Over 30 hours have passed since my post, and LingQ is still largely unusable (unless you enjoy constantly refreshing the page and being unable to turn the page half the time) - it’s working even worse today than yesterday.


Who are we to complain about minor decisions they make when those decisions don’t affect the core functionality of LingQ - mass reading and listening. ©

Wait please, they’re iNtEgRatINg cHaTgPt %)

You can turn your white words back to blue with the mouse while waiting. And make a backup of your imported materials, just in case.


I have backups! Or rather, it’s mostly ebooks that I import, plus some articles I’ve already gone through. I’m definitely not trusting LingQ as my main storage :wink:

You can turn your white words back to blue with the mouse while waiting.

It’d be a hassle. I tend to read in 45-90 min bursts at roughly 15-20% new words, so that’s just a lot of words. Better to spend my time anywhere other than LingQ while this is happening.

Wait please, they’re iNtEgRatINg cHaTgPt %)

God, I hope not :smiley: But I googled that quote (interesting mindset…) and saw the discussion. It’s actually funny how quickly that backfired.


– God, I hope not

It seems they are :smiley:

After one of these updates, I lost about 70 out of 80 manually imported and edited podcasts from one course, and some of the other courses were lost completely. Then the 5.0 version came out with the “better backend”…

– For gods sake, stop rolling out new features without testing them first.

Absolutely. I’d even say “drop 80% of the features for the good”. Reader, dictionaries, import/export and stats are more than enough for their team to maintain.


I appreciate the reference to my (now famous, I guess) statement. Surely you don’t think that I’m actually okay with this either? Of course I wish they would test their features better so that we stop getting recurring bugs re-appearing even after they were previously fixed. I’ve said that many times in the past. When issues like this keep happening regularly, there are serious problems in their development process (whether that’s not having good bug tracking, not properly testing on a test server, or something else).

I said that quote in the context of the required known words for each level being increased, and I still stand by that - if they want to increase/decrease those levels, in my opinion, that’s fine. It’s not a bug, and it doesn’t take away from the main activities on LingQ. But that’s just my opinion and I don’t expect everyone to agree, and that’s not what this thread is about anyways.


(I can’t reply to your latest post)

I’d even say “drop 80% of the features for the good”. Reader, dictionaries, import/export and stats are more than enough for their team to maintain.

Yeah. That’s why I was hoping they wouldn’t touch any of the AI stuff. There are so many features here already that aren’t really necessary, while the core stuff regularly breaks down or doesn’t work. I mean, I haven’t counted the days, but I’d estimate that between 1/4 and 1/3 of the days I’ve used LingQ had some disruption.


Amazing how everything gets f***ed up by an update… Shouldn’t that be tested BEFORE it gets rolled out and makes the whole platform unusable?

Besides that: shouldn’t fixing the bugs that already exist and improving overall usability be the main concern, instead of implementing fancy s**t nobody needs?


The issues with LingQ are manifold. Frankly, I have never used a software product with so many bugs. I don’t want to dramatize, because I can normally find workarounds, but it’s not a good look nonetheless. That’s also why I have never personally recommended LingQ to anyone.
It is concerning that almost every time they introduce new features, basic functionality breaks. I don’t think this is simply due to a lack of testing; it could be an indication that the codebase is somewhat fragile. I’m certainly no expert, but the JavaScript codebase here is both sizable and complex-looking.
What’s interesting to note is that the different platforms are developed in very different ways. For example, the iOS / iPadOS app generally has far superior software quality. I have used the app for well over 1000 hours and can count the bugs I’ve experienced on the fingers of one hand - and those were fixed in a timely manner as well. The website, however, has been struggling since LingQ 5 was introduced. It would be great if the web developers could take some inspiration from the iOS developer.
Generally, one would expect them to change their processes after a botched release. But unfortunately the same pattern repeats over and over again.
Of course, it is known that the situation in Ukraine has exacerbated the software problems at LingQ, since that’s where the web developers are located. This is a tough situation with no obvious solution.

Regarding the use of AI, maybe LingQ could hook their developers up with a subscription to GitHub Copilot (GitHub Copilot · Your AI pair programmer · GitHub)? I’ve heard good things about it, especially for JS and Python.


Don’t take it personally, I could have cited myself as well:

“On the bright side, the main LingQ concept, that is massive reading and listening, is well-supported.”

Just “who are we” has stuck with me as a kind of slogan and pops up in my mind every time I see issues on LingQ. Like, why has the Android app looked and felt bad since 4.0? But who are we… Why is the dictation test called a “dictation” test if it’s a quiz on the Android app? But who are we… Why are there scammy mechanics in the “cancel membership” section, where Zoran manually checks your account and “payment details”? But… That feeling of “who are we” follows each update.

As you said in that comment:

At the end of the day, LingQ knows how their users use the site, and are developing for the masses

More likely they don’t care about the users or how the users use the site. It’s as if they’re making moves to show Steve: hey, look sir, we’re doing something and it’s very important. Maybe the sir himself doesn’t care, though.

And it’s stayed ‘amateurish’ since I signed up back in 2020, and no amount of kind feedback from users has made any difference.
