Unable to import eBook

I have an eBook I’ve tried importing, but it keeps failing. I initially bought the book on Amazon and have tried importing the converted epub and txt files, but neither works. I also tried renaming the file in case there was an issue there, but no luck.

It starts off like it’s going to import and creates the first part, but it never goes any further, and when I eventually try to open it, I get a generic “import failed” message.

I’ve successfully imported other eBooks I’ve purchased from Amazon without an issue.

In the image, you can see both of the failed imports: one for the epub version and one for the txt version.

Any ideas what I should try next?

1 Like

Because it affects the text file as well, the issue is likely related to the file’s encoding.

Your best bet is to email the file to LingQ support; that way they might be able to account for your specific case in future imports.

There are a couple of things you could try yourself:

  1. Try saving the file with UTF-8 encoding.

  2. If you want a janky workaround, remove any special characters from the text. You could try here.

3 Likes

Thanks for the response @roosterburton. Looks like I’ll just need to email support. I tried both of your suggestions, but am still having the same issue.

I also tried importing a different book and the same thing happened. It was also purchased from Amazon and converted with Calibre.

I was able to import a different book via the same process just last week. :confused:

1 Like

Does this tool work with hidden special characters as well? Could you import an entire ebook in that online tool?

I often find that when I convert with Epubor, there are special characters that are not visible.
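For spotting those invisible characters yourself, here’s a rough sketch. The flagged code points (zero-width characters, BOM, non-breaking space, control characters) are my own guess at common import-breakers, not an exhaustive list.

```javascript
// Sketch: list invisible/unusual characters in a string as Unicode code points.
// The flagged ranges are a guess at common culprits, not an exhaustive list.
function findHiddenCharacters(text) {
    const found = [];
    for (const ch of text) {
        const code = ch.codePointAt(0);
        const isControl = code < 0x20 && ch !== '\n' && ch !== '\t' && ch !== '\r';
        const isZeroWidth = code >= 0x200B && code <= 0x200F; // zero-width space, joiners, marks
        if (isControl || isZeroWidth || code === 0x00A0 || code === 0xFEFF) {
            found.push('U+' + code.toString(16).toUpperCase().padStart(4, '0'));
        }
    }
    return found;
}
```

Running it over the converted text before importing would at least tell you whether hidden characters are present.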

1 Like

I can’t comment on its effectiveness. I just googled and clicked the first option.

I use this specific replacement for importing from Viki, because there are so many failed imports. It may help with your case too.

const text = 'yourimporttexthere';
const newText = removeUnwantedCharacters(text);
console.log(newText);

function removeUnwantedCharacters(text) {
    // Insert a space before each uppercase letter that follows a lowercase letter
    let cleanedText = text.replace(/([a-z])([A-Z])/g, '$1 $2');

    // Also separate runs of uppercase letters (acronyms) from a following capitalized word
    cleanedText = cleanedText.replace(/([A-Z])([A-Z][a-z])/g, '$1 $2');

    // Convert common HTML entities to their corresponding characters.
    // '&amp;' is decoded last so that, e.g., '&amp;lt;' isn't double-decoded.
    const htmlEntities = {
        '&lt;': '<',
        '&gt;': '>',
        '&quot;': '"',
        '&#39;': "'",
        '&amp;': '&'
    };
    for (const [entity, replacement] of Object.entries(htmlEntities)) {
        cleanedText = cleanedText.replace(new RegExp(entity, 'g'), replacement);
    }

    // Remove emojis and pictographic symbols (specific Unicode ranges)
    cleanedText = cleanedText.replace(/[\u{1F600}-\u{1F64F}\u{1F300}-\u{1F5FF}\u{1F680}-\u{1F6FF}\u{1F700}-\u{1F77F}\u{1F780}-\u{1F7FF}\u{1F800}-\u{1F8FF}\u{1F900}-\u{1F9FF}\u{1FA00}-\u{1FA6F}\u{1FA70}-\u{1FAFF}\u{2600}-\u{26FF}\u{2700}-\u{27BF}]/gu, "");

    return cleanedText;
}
2 Likes

It’s frustrating that I’ve received no response here from LingQ staff, and only short responses via email asking a question that I had already answered in my original message.

1 Like

Probably something like this could/should be implemented directly into LingQ as well. Maybe they have something similar? I sent them a couple of files some time ago to check on those symbols, but haven’t received a reply yet.

1 Like

You can send me the files if you like; I’ll run that fix on them and try uploading.

1 Like

I got the same message trying to import a YouTube video using the browser plugin. I’m guessing there’s an issue on LingQ’s end.

1 Like

I don’t use them anymore since I’ve finished those lessons, but I’ll see whether the problem persists with the next one I encounter. In any case, they were ebooks converted with Epubor, so I suppose the encoding is always the same.

1 Like

Sent you a DM. Thanks!

1 Like

For anyone who might come across this thread in the future: After @roosterburton tested the files, I realized importing failed during the “Optimize word splitting using AI” step. Going into the settings and turning that off by default allows me to import my ebooks.

I’m not sure what the difference in word-splitting quality is between the standard and AI-enhanced versions, however.

2 Likes

Where are these settings? I don’t see this option in my webapp.

1 Like

This is only available for certain languages such as Japanese or Chinese.

2 Likes