
STEMM: Self-learning With Speech-text Manifold Mixup For Speech Translation

Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022 – 67 citations

[Paper]
ACL

How can we learn better speech representations for end-to-end speech-to-text translation (ST) with limited labeled data? Existing techniques often attempt to transfer powerful machine translation (MT) capabilities to ST, but neglect the representation discrepancy across modalities. In this paper, we propose the Speech-TExt Manifold Mixup (STEMM) method to calibrate this discrepancy. Specifically, we mix up the representation sequences of the two modalities, feed both unimodal speech sequences and multimodal mixed sequences to the translation model in parallel, and regularize their output predictions with a self-learning framework. Experiments on the MuST-C speech translation benchmark and further analysis show that our method effectively alleviates the cross-modal representation discrepancy and achieves significant improvements over a strong baseline on eight translation directions.
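The sketch below illustrates the two ideas described in the abstract: token-level mixing of speech and text embeddings, and a self-learning loss that trains on both the speech-only and the mixed input while pulling their predictions together. It is a minimal illustration, not the paper's released implementation: it assumes speech and text embeddings are already aligned position-by-position (the paper aligns speech segments to word boundaries), and the function names `manifold_mixup` and `stemm_loss` are hypothetical.

```python
import torch
import torch.nn.functional as F

def manifold_mixup(speech_emb, text_emb, mix_prob=0.5):
    """Token-level mixup sketch: for each position, keep the text embedding
    with probability `mix_prob`, otherwise substitute the corresponding
    (assumed pre-aligned) speech embedding.

    speech_emb, text_emb: (batch, seq_len, dim) tensors assumed to be
    aligned position-by-position -- a simplification of the paper's
    word-boundary alignment.
    """
    keep_text = torch.rand(text_emb.shape[:2], device=text_emb.device) < mix_prob
    mask = keep_text.unsqueeze(-1).float()
    return mask * text_emb + (1.0 - mask) * speech_emb

def stemm_loss(model, speech_emb, mixed_emb, target, pad_id=0):
    """Self-learning objective sketch: translation loss on both the unimodal
    speech input and the multimodal mixed input, plus a KL term that pulls
    the speech-only predictions toward the mixed-sequence predictions.

    `model` is any callable mapping embeddings (batch, seq, dim) to
    logits (batch, seq, vocab); `target` holds token ids (batch, seq).
    """
    logits_speech = model(speech_emb)
    logits_mixed = model(mixed_emb)

    # Standard translation (cross-entropy) losses for the two parallel inputs.
    ce_speech = F.cross_entropy(logits_speech.transpose(1, 2), target, ignore_index=pad_id)
    ce_mixed = F.cross_entropy(logits_mixed.transpose(1, 2), target, ignore_index=pad_id)

    # Regularize the speech branch toward the mixed branch, which acts as the teacher.
    kl = F.kl_div(
        F.log_softmax(logits_speech, dim=-1),
        F.softmax(logits_mixed, dim=-1).detach(),
        reduction="batchmean",
    )
    return ce_speech + ce_mixed + kl
```

Treating the mixed-sequence branch as the teacher reflects the intuition in the abstract: the mixed input benefits from text-side (MT-like) representations, so matching the speech-only predictions to it transfers that capability without requiring extra labeled ST data.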

Similar Work