Technical Issue: Inaccurate AI Word Translations and Contextual Parsing

Hello,
I’m reporting a recurring technical issue involving LingQ’s AI-powered word translations.

While LingQ is the most advanced platform available for extensive reading and vocabulary acquisition, the current AI model used for single-word lookup often generates contextually inaccurate or misleading translations—especially in literary texts.

Examples:

- “gibbet” translated as “gallows”, although in context it refers to a suspended iron cage (a medieval punishment), not an execution structure.

- “wriggler” translated as “contortionist”, while the correct contextual meaning is “a dying prisoner still able to move slightly”.

These errors indicate that the current word-level translation system:

  1. Does not reliably analyze broader narrative context

  2. Defaults to literal or statistically common dictionary senses

  3. Struggles with archaic, figurative, or genre-specific vocabulary

  4. Does not perform semantic disambiguation before outputting a translation
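
To make point 4 concrete, the missing step can be sketched as a simple overlap heuristic in the style of the Lesk algorithm: score each candidate dictionary sense against the surrounding sentence and pick the best match. This is only an illustration of what "semantic disambiguation before outputting a translation" means, not LingQ's actual pipeline; the sense glosses below are made-up entries, not a real dictionary.

```python
def disambiguate(word, context, senses):
    """Pick the sense whose gloss shares the most words with the context.

    A minimal Lesk-style heuristic: the sense whose definition overlaps
    most with the surrounding sentence wins. Illustrative only.
    """
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for label, gloss in senses.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = label, overlap
    return best_sense


# Hypothetical glosses for "gibbet" (not from any real dictionary):
senses = {
    "gallows": "wooden frame for execution by hanging",
    "iron cage": "iron cage in which a prisoner was suspended as punishment",
}

context = ("The prisoner was locked in the gibbet and suspended "
           "from the wall as punishment")
print(disambiguate("gibbet", context, senses))  # → "iron cage"
```

Even this crude overlap check picks the contextually correct sense; a model that skips any such step will default to the statistically common one, which matches the behavior reported above.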

In contrast, sentence-level AI translation within LingQ often performs significantly better, suggesting that the underlying word-translation model is outdated or insufficient for contextual disambiguation.

Questions for the development team:

  1. Are there plans to upgrade the AI/dictionary backend to a more modern contextual model (e.g., GPT-based or equivalent)?

  2. Will LingQ eventually unify single-word and sentence-level translation under the same contextual AI system?

  3. Are there recommended workarounds or official integrations to allow users to pull contextual definitions from external LLM APIs?
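
As an illustration of the workaround asked about in question 3, a user-side lookup could send the word together with its full sentence to an external LLM API, so the model disambiguates before translating. The prompt wording, model name, and API usage below are assumptions for the sketch, not an official LingQ integration:

```python
def build_lookup_prompt(word, sentence, target_lang="English"):
    """Build a prompt that forces the model to use sentence context
    rather than the word's most common dictionary sense."""
    return (
        f"In the sentence below, give the {target_lang} meaning of "
        f"'{word}' as used in this specific context, not its most "
        f"common dictionary sense.\n\n"
        f"Sentence: {sentence}"
    )


# The actual call would go through an LLM client (requires an API key;
# shown commented out for illustration only):
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",  # assumed model name
#     messages=[{"role": "user", "content": build_lookup_prompt(
#         "gibbet",
#         "They hung the gibbet, cage and corpse, from the tower.")}],
# )
# print(reply.choices[0].message.content)
```

The key design point is simply that the whole sentence travels with the word, which is what the sentence-level translation already does and the word-level lookup apparently does not.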

Enhancing the accuracy of word-level AI translation would dramatically improve the user experience, especially for learners reading complex or literary materials where semantic nuance is essential.

Thank you for your attention.