By user2248702


2019-03-24 04:50:03

I'm generating speech through Google Cloud's text-to-speech API and I'd like to highlight words as they are spoken.

Is there a way of getting timestamps for spoken words or sentences?


@user2248702 2020-05-01 05:17:01

This question seems to have gotten quite popular so I thought I'd share what I ended up doing. This method will probably only work with English or similar languages.

I first split the text on any punctuation that causes a break in speaking. Each "sentence" is converted to speech separately. The resulting audio files have a seemingly random amount of silence at the end, which needs to be removed before joining them; this can be done with the FFmpeg silencedetect filter. You can then join the audio files with an appropriate gap. Approximate word timestamps can be linearly interpolated within each sentence.
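For illustration, here is a minimal sketch of that pipeline in Python. It assumes the google-cloud-texttospeech client library and the ffmpeg/ffprobe command-line tools are available; the helper names (split_sentences, spoken_duration, word_timestamps), the chosen voice, and the silence thresholds are illustrative choices, not anything prescribed by the answer or the API.

    # Sketch of the approach above: split on punctuation, synthesize each
    # sentence, trim trailing silence found with silencedetect, then
    # linearly interpolate word start times within each sentence.
    import re
    import subprocess

    from google.cloud import texttospeech

    client = texttospeech.TextToSpeechClient()

    def split_sentences(text):
        """Split text on punctuation that causes a pause in speech."""
        parts = re.split(r"(?<=[.!?;:,])\s+", text.strip())
        return [p for p in parts if p]

    def synthesize(sentence, filename):
        """Convert one sentence to speech and write it to an MP3 file."""
        response = client.synthesize_speech(
            input=texttospeech.SynthesisInput(text=sentence),
            voice=texttospeech.VoiceSelectionParams(
                language_code="en-US", name="en-US-Wavenet-D"
            ),
            audio_config=texttospeech.AudioConfig(
                audio_encoding=texttospeech.AudioEncoding.MP3
            ),
        )
        with open(filename, "wb") as f:
            f.write(response.audio_content)

    def audio_duration(filename):
        """Total duration of the file in seconds, via ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", filename],
            capture_output=True, text=True, check=True,
        )
        return float(out.stdout.strip())

    def spoken_duration(filename):
        """Duration up to the start of the trailing silence, via silencedetect."""
        out = subprocess.run(
            ["ffmpeg", "-i", filename, "-af",
             "silencedetect=noise=-40dB:d=0.2", "-f", "null", "-"],
            capture_output=True, text=True,
        )
        starts = re.findall(r"silence_start: ([\d.]+)", out.stderr)
        return float(starts[-1]) if starts else audio_duration(filename)

    def word_timestamps(text, gap=0.3):
        """Approximate (word, start_time) pairs for the whole text."""
        timestamps = []
        offset = 0.0
        for i, sentence in enumerate(split_sentences(text)):
            filename = f"sentence_{i}.mp3"
            synthesize(sentence, filename)
            duration = spoken_duration(filename)
            words = sentence.split()
            # Spread word start times evenly across the spoken duration.
            for j, word in enumerate(words):
                timestamps.append((word, offset + duration * j / len(words)))
            offset += duration + gap
        return timestamps

    if __name__ == "__main__":
        for word, t in word_timestamps("Hello there. How are you today?"):
            print(f"{t:6.2f}s  {word}")

Note that taking the last silence_start as the end of speech is only an approximation; a long mid-sentence pause would also be reported, so in practice you may want to check that the detected silence actually runs to the end of the file.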
