
Direct Speech-to-Speech Translation with a Sequence-to-Sequence Model

Ye Jia, Ron J. Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, Yonghui Wu. Interspeech 2019 – 152 citations


We present an attention-based sequence-to-sequence neural network which can directly translate speech from one language into speech in another language, without relying on an intermediate text representation. The network is trained end-to-end, learning to map speech spectrograms into target spectrograms in another language, corresponding to the translated content (in a different canonical voice). We further demonstrate the ability to synthesize translated speech using the voice of the source speaker. We conduct experiments on two Spanish-to-English speech translation datasets, and find that the proposed model slightly underperforms a baseline cascade of a direct speech-to-text translation model and a text-to-speech synthesis model, demonstrating the feasibility of the approach on this very challenging task.
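The sketch below illustrates the core idea in PyTorch: an encoder reads source-language mel spectrogram frames, and an autoregressive decoder attends over the encoder outputs to emit target-language spectrogram frames directly, with no intermediate text. This is a minimal illustration under assumed hyperparameters, not the authors' Translatotron implementation, which uses a stacked BLSTM encoder, multi-head attention, and a Tacotron 2-style decoder with auxiliary components for speaker voice transfer.

```python
# Minimal sketch of direct spectrogram-to-spectrogram translation with an
# attention-based seq2seq model. Layer sizes and module layout are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class SpectrogramSeq2Seq(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        # Encoder: bidirectional LSTM over source-language mel frames.
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        # Attention: the decoder state queries the encoder outputs.
        self.attention = nn.MultiheadAttention(embed_dim=2 * hidden,
                                               num_heads=4, batch_first=True)
        # Decoder: autoregressive cell fed with the previous target frame
        # and the attention context; a linear layer predicts the next frame.
        self.decoder = nn.LSTMCell(n_mels + 2 * hidden, 2 * hidden)
        self.frame_proj = nn.Linear(2 * hidden, n_mels)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src: (batch, src_frames, n_mels); tgt: (batch, tgt_frames, n_mels)
        memory, _ = self.encoder(src)
        batch = src.size(0)
        h = src.new_zeros(batch, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        go_frame = src.new_zeros(batch, tgt.size(-1))
        outputs = []
        # Teacher forcing: at step t, condition on the ground-truth frame t-1.
        for t in range(tgt.size(1)):
            prev = go_frame if t == 0 else tgt[:, t - 1]
            context, _ = self.attention(h.unsqueeze(1), memory, memory)
            step_in = torch.cat([prev, context.squeeze(1)], dim=-1)
            h, c = self.decoder(step_in, (h, c))
            outputs.append(self.frame_proj(h))
        return torch.stack(outputs, dim=1)


# Usage: train end-to-end with a frame-level reconstruction loss, e.g.
#   loss = nn.functional.mse_loss(model(src, tgt), tgt)
```

Training such a model end-to-end on a spectrogram loss alone is difficult; the paper reports that auxiliary decoder losses predicting source and target phoneme sequences (used only during training) are important for the model to learn alignment, and a separate vocoder converts the predicted spectrogram to a waveform.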
