Q: How should I punctuate my subtitles for automated dubbing?
Here are a few tips on how you can improve the pronunciation of our digitized voices yourself, using simple punctuation:
- Prepare your text ahead of time – Before running the dubbing, review your text in a subtitle editor (like Subtitle Edit) and make sure it contains proper punctuation. A period at the end of each sentence, a comma every 6-8 words, and a question mark or exclamation mark where appropriate can add a lot of character to the digitized voice
- There is more to punctuation than . , ! and ? – Each character might have a different effect on timing in different voices. Some voices include pauses (each with a different length) for – ; : ( ) ” ‘ and more, and in certain languages (e.g. Arabic, Japanese, Korean, Hindi) there are additional characters that serve as a period and may produce different pauses and emphasis, so use the English version of the punctuation if needed. Furthermore, when people speak slowly, you might want to add extra punctuation (even if it is grammatically incorrect) to mimic the speaker’s pace and pauses, or even split the subtitle into several ones to get the timing of the pauses right. Just be aware that our engine treats a gap of less than 1 second between two subtitles as one consolidated subtitle, unless you end the first one with a period, a question mark or an exclamation mark (see the example after this list)
- Use the Dubbing Editor’s interactive preview – Within the dubbing editor, you can test how your sentences sound and play with them to get the best outcome. Once you modify the text in a subtitle, you can hit the play button on its right to instantly hear a revised version. Although the interactive preview plays it at a default speed (since acceleration must be calculated for the whole movie when a re-run is done), for this purpose it works perfectly
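For example, here is a made-up pair of .SRT subtitles with a gap of about 0.7 seconds between them; because the first one ends with a period, the engine treats them as two separate subtitles instead of consolidating them:

12
00:01:04,000 --> 00:01:06,200
He opened the door.

13
00:01:06,900 --> 00:01:09,000
Nobody was there.

Remove that period and the two would be read as one consolidated subtitle, changing the timing of the pause between them.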
Remember: proper punctuation makes the voices sound more human. It doesn’t have to be grammatically correct, but it does need to sound human, and as we all know, humans do not normally speak more than 8-12 words without taking a short pause, to breathe, or to add emphasis.
For example, there is a difference between “A woman without her man is nothing” and “A woman: without her, man is nothing.”
Make that difference clear for the digitized voice. It is just software – it does not understand the content unless you make sense of it for the voice.
As always, if that wasn’t clear enough – contact us and ask us directly. We’re here to help and improve. Please be as specific as possible (name the files involved, the subtitle line in question, the error you found, and the proposed change), and we’ll be in touch.
Q: Having synchronization problems in your dubbed video?
First, we’re here to help.
Here are a few ways to improve your result by yourself:
- Use VideoDubber’s Dubbing Editor – We created a special utility to help you solve these issues by yourself. Once you get your preview, and before you commit and purchase the result, you can review all of the spoken sentences within the subtitles editor, improving both timing and pronunciation, with its user-friendly interface. Experiment and hear the results immediately. Remember: only once you’re happy with your dubbed video and want to download the final result will you be asked to make the purchase!
- Accurate subtitle timing is key – Make sure your subtitles are synced well. Subtitle Edit is a great FREE tool to fine-tune your subtitles’ timings before you upload them. Once you’ve uploaded them, use VideoDubber’s Subtitles Editor to do any fine-tuning needed
- The voice speaks too fast – We know this is an issue when moving from a short language (e.g. English) into a long language (e.g. German). Make sure you do not put too many words within one subtitle’s timing (a reasonable pace is 2 words per second, and remember that ‘a’ and ‘the’ are words as well). Accelerating more than 25% beyond that pace will make the voice’s pronunciation hard to understand. Our engine can compensate if the next subtitles are less crowded or there’s a pause between them, but if all of your subtitles are overcrowded and there are no pauses, we heartily suggest that you edit your subtitles and minimize the amount of text in them, trying to capture the essence of the message instead of translating every last word (see the pacing check sketched after this list)
- The original actor’s voice returns to full volume after the synthetic voice has finished the translated sentence – Make sure the subtitle’s time covers ALL of the time the original actor is speaking, and the engine will diminish the original actor’s voice within that time frame
- The voice jumps back and forth in its speaking pace – See ‘The voice speaks too fast’ above. Try to correct the timing within your subtitles, i.e. build balanced timings that match a reasonably steady pace (based on the ‘2 words per second’ formula), with looser ties to the timings in the original subtitles file, in order to create a better experience for the viewer
- There is more to punctuation than . , ! and ? – Each character might have a different effect on timing in different voices. Some voices include pauses (each with a different length) for – ; : ( ) ” ‘ and more, and in certain languages (e.g. Arabic, Japanese, Korean, Hindi) there are additional characters that serve as a period and may produce different pauses and emphasis, so use the English version of the punctuation if needed. Furthermore, when people speak slowly, you might want to add extra punctuation (even if it is grammatically incorrect) to mimic the speaker’s pace and pauses, or even split the subtitle into several ones to get the timing of the pauses right. Just be aware that our engine treats a gap of less than 1 second between two subtitles as one consolidated subtitle, unless you end the first one with a period, a question mark or an exclamation mark (see the example in the punctuation question above)
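If you’d like to check your pacing before uploading, here is a minimal Python sketch (not part of VideoDubber’s tooling; the file name my_subtitles.srt is just a placeholder) that scans a well-formed .SRT file and flags subtitles that run faster than roughly 2 words per second:

# Rough sketch: flag .SRT subtitles that exceed ~2 words per second.
# Assumes a well-formed .SRT file; "my_subtitles.srt" is a placeholder name.
import re

def to_seconds(timecode):
    # "00:01:02,500" -> 62.5
    h, m, rest = timecode.split(":")
    s, ms = rest.split(",")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

with open("my_subtitles.srt", encoding="utf-8-sig") as f:
    blocks = re.split(r"\n\s*\n", f.read().strip())

for block in blocks:
    lines = block.splitlines()
    if len(lines) < 3:
        continue  # skip malformed entries
    start, end = (to_seconds(part.strip()) for part in lines[1].split("-->"))
    words = len(" ".join(lines[2:]).split())
    duration = max(end - start, 0.001)
    pace = words / duration
    if pace > 2.0:
        print(f"Subtitle {lines[0]}: {words} words in {duration:.1f}s "
              f"({pace:.1f} words/sec) - consider trimming or retiming")

It only reports the crowded subtitles; trimming the text or retiming them is still up to you.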
If that’s not enough – contact us and let us know. We’re here to help and improve. Please be as specific as possible (name the files involved, the subtitle line in question, the error you found, and the proposed change), and we’ll be in touch.
Q: Having pronunciation issues in your dubbed video?
First, we’re here to help.
Our service uses cutting-edge technology and still isn’t perfect (human dubbers can also make mistakes), so we improve it all the time, and your feedback is a crucial part of this process.
Teaching voices how to improve their pronunciation is like teaching children how to read. It takes some effort and time to train them, but fortunately, once our voices learn how to pronounce something, they do not forget it, so the effort is worthwhile.
In the meantime, what can I do?
Well, there are a few ways to improve your result by yourself:
- Use VideoDubber’s Dubbing Editor – We created a special utility to help you solve these issues by yourself. Once you get your preview, and before you commit and purchase the result, you can review all of the spoken sentences within the subtitles editor, improving both timing and pronunciation, with its user-friendly interface. Experiment and hear the results immediately. Remember: only once you’re happy with your dubbed video and want to download the final result will you be asked to make the purchase!
- Some words aren’t pronounced accurately enough – Test different spellings (in some English voices, ‘Apel’ sounds better than ‘Apple’) within your subtitles. It’s not a spelling bee, so play with it and use what works best: try playing with punctuation (to split a word differently), writing the text phonetically, or even writing the word in English (most voices know how to read English with their own native accent)
- Some sentences have the wrong emphasis – Try different wordings. In most languages you can say almost the same thing in many different ways… In general, shorter sentences are pronounced more accurately (as the linguistic model is simpler for the engine). Test different punctuation (,.;:-“‘!? etc. might have different effects on the way the sentence is spoken). The voices use different linguistic models when the punctuation is different, and as in the ‘Apel vs. Apple’ case, it’s not a grammar test, so play with it to get it right
- Not all voices were created equal – Test another voice (if we have a similar one, in a language/gender you can work with, within our voice library). It doesn’t cost extra to try…
If that’s not enough – contact us and let us know. Please be as specific as possible (name the files involved, the subtitle line in question, the error you found, and the proposed change), and we’ll be in touch.
The voice technology team can improve the next release of the voice (usually a quarterly update), and some of your suggestions (e.g. correcting the pronunciation of a specific name) could be introduced even earlier, using our special lexicon modification tools, especially for you.
Q: What type of content should I dub using VideoDubber’s automated dubbing platform?
To provide accessibility to viewers who do not know the language of the content, one can use VideoDubber’s platform to dub the video content into their language.
The up-to-date list of supported languages is growing all the time.
There are many benefits to using automated dubbing, among them the speed of the service and the quality of the end results.
Still, since the current platform doesn’t support emotions yet, emotional genres (e.g. children’s movies and TV programs) aren’t a natural choice for automated dubbing.
There are additional types of content that cannot be automatically dubbed at this time, e.g. singing.
We are testing additional technologies that might provide solutions for these needs, but in the meantime our service is focused on cultures and content genres that don’t depend on them.
Thus, genres like documentaries, lifestyle, travel, news, training, education and even talk shows are a great fit for automated dubbing.
Certain cultures, e.g. Russian, are less accustomed to emotional dubbing, and can thus consume automated dubbing for almost any type of content.
Check out our video gallery for demos created using our digitized voices on the VideoDubber platform.
Q: How can I create subtitles for my project?
Subtitle creation is fairly simple. Here’s a nice article on eHow about creating .SRT files.
We wrote a short post about subtitling using the .SRT format – be sure to read it to learn more.
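For reference, an .SRT file is just plain text: a running number, a start time and an end time separated by an arrow, the subtitle text, and a blank line between entries. A minimal made-up example:

1
00:00:01,000 --> 00:00:03,500
Welcome to the show.

2
00:00:04,500 --> 00:00:09,500
Today we are visiting the old city and its markets.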
Last but not least, we recently added a new feature that converts plain text into subtitles automatically, meaning you can now use our platform to generate simple narration by simply pasting text, without worrying about the accurate timing normally required to create subtitle files.
It’s still in beta, and is mostly useful for narration purposes, but you can try using it as a starting point, should you find .SRT file creation cumbersome.
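If you prefer to prepare such a starting point yourself, here is a minimal, hypothetical Python sketch of the same idea (this is not VideoDubber’s implementation): it splits plain text into chunks of about 8 words and assigns timings at roughly 2 words per second, producing a rough .SRT you can then fine-tune:

# Hypothetical sketch: turn pasted text into a rough .SRT at ~2 words per second.
# A smarter version would split on sentence boundaries; this just shows the idea.
def fmt(seconds):
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def text_to_srt(text, words_per_cue=8, words_per_second=2.0):
    words = text.split()
    cues, start = [], 0.0
    for i in range(0, len(words), words_per_cue):
        chunk = words[i:i + words_per_cue]
        end = start + len(chunk) / words_per_second
        cues.append(f"{len(cues) + 1}\n{fmt(start)} --> {fmt(end)}\n{' '.join(chunk)}\n")
        start = end + 1.0  # 1-second gap so consecutive cues are not consolidated
    return "\n".join(cues)

print(text_to_srt("Welcome to our travel show. Today we visit the old city and its colorful markets."))

The result is only a rough draft: review it in a subtitle editor (or in our Subtitles Editor) to fix the timing and punctuation before dubbing.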