VTML markup in user dictionary - VoiceText text-to-speech
Hello,
I am working with Captivate 10 and the "integrated" VoiceText text-to-speech. I have already used the VoiceText user dictionary to improve/change the pronunciation of words.
I have some words where I would like to speed up the TTS, and I would like to do this for every occurrence of these words in the TTS. Obviously I could extend the TTS texts accordingly with the respective markup, e.g. <vtml_speed value=150>myword</vtml_speed>, but I would prefer to keep the plain, correct text in the TTS Management, e.g. so that possible translations are not disturbed by the VTML markup. Additionally, it would be much easier to write the TTS texts if the VTML "tagging" that has to be repeated for every occurrence could be done "centrally" instead.
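To illustrate the overhead I want to avoid: without a central solution, every affected sentence in TTS Management would need the markup inline, roughly like this (a sketch only; "configuration" and the speed value 150 are placeholder choices, and whether the engine requires quoted attribute values is an assumption on my part):

```
Plain text kept in TTS Management (what I want):
  Open the configuration dialog and click Save.

Tagged text the engine would need (what I want to avoid repeating):
  Open the <vtml_speed value="150">configuration</vtml_speed> dialog and click Save.
```

Ideally the mapping "configuration" → "<vtml_speed value=150>configuration</vtml_speed>" would live once in the user dictionary rather than in every sentence.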
A possible solution could have been to add these words to the user dictionary, but unfortunately neither the VTEditor nor the user dictionary accepts target texts that include VTML markup. (Of course 😉 I have also tried to manipulate the userdict_eng.csv of the voice I use directly, but the VoiceText engine simply ignores these entries.)
Does anyone have a suggestion or workaround for this? As said, my idea is to keep the correct, "VTML-free" (or at least as VTML-free as possible) texts in the TTS, and to manage the pronunciation/speed/pauses centrally in the user dictionary. (I also do not want to maintain separate texts in the slide comments, i.e. one comment with the correct text and one comment for the TTS including the VTML markup.)
Thanks in advance for your help!
