Sentence Boundary Augmentation For Neural Machine Translation Robustness

Daniel Li, Te I, Naveen Arivazhagan, Colin Cherry, Dirk Padfield. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) – 490 citations

Tags: EMNLP, Evaluation, Interdisciplinary Approaches, Neural Machine Translation, Security, Training Techniques

Neural Machine Translation (NMT) models demonstrate strong state-of-the-art performance on translation tasks where well-formed training and evaluation data are provided, but they remain sensitive to inputs that contain errors of various types. Specifically, in long-form speech translation systems, where input transcripts come from Automatic Speech Recognition (ASR), NMT models must handle errors such as phoneme substitutions, ungrammatical structures, and incorrect sentence boundaries, all of which pose challenges to NMT robustness. Through in-depth error analysis, the authors show that sentence boundary segmentation has the largest impact on translation quality, and they develop a simple data augmentation strategy to improve segmentation robustness.
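The abstract does not spell out the augmentation procedure, so the following is only a minimal, hypothetical sketch of what "sentence boundary augmentation" over parallel training data could look like, not the authors' exact method. The function name, the 50/50 mix of merged versus split segments, and the proportional target split are illustrative assumptions.

```python
import random

def augment_boundaries(pairs, n_aug=1000, seed=0):
    """Generate extra (source, target) pairs with perturbed sentence boundaries.

    `pairs` is an ordered list of parallel sentence pairs from the same document.
    Hypothetical strategy: either merge two adjacent pairs into one segment
    (simulating a missed boundary) or split a pair at a random, proportionally
    aligned point (simulating a spurious boundary), so training segments no
    longer align with true sentence boundaries, as in ASR-segmented input.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_aug):
        if rng.random() < 0.5 and len(pairs) > 1:
            # Missed boundary: concatenate two consecutive sentence pairs.
            i = rng.randrange(len(pairs) - 1)
            out.append((pairs[i][0] + " " + pairs[i + 1][0],
                        pairs[i][1] + " " + pairs[i + 1][1]))
        else:
            # Spurious boundary: cut the source at a random token and cut the
            # target at the proportionally corresponding position (a crude
            # stand-in for a real alignment-based split).
            src, tgt = pairs[rng.randrange(len(pairs))]
            s, t = src.split(), tgt.split()
            if len(s) < 2 or len(t) < 2:
                continue
            cut = rng.randrange(1, len(s))
            tcut = min(len(t) - 1, max(1, round(cut / len(s) * len(t))))
            out.append((" ".join(s[:cut]), " ".join(t[:tcut])))
            out.append((" ".join(s[cut:]), " ".join(t[tcut:])))
    return out
```

Mixing such perturbed segments into the original training data would expose the NMT model to the kinds of mis-segmented inputs produced by an ASR front end, which is the robustness gap the paper targets.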

Similar Work