Code-switching For Enhancing NMT With Pre-specified Translation

Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang. arXiv 2019 – 40 citations

[Paper]
Image Text Integration · Multimodal Semantic Representation · Training Techniques · Visual Contextualization

Leveraging user-provided translations to constrain NMT has practical significance. Existing methods fall into two main categories: using placeholder tags for lexicon words, and imposing hard constraints during decoding. Both can hurt translation fidelity for various reasons. We investigate a data augmentation method that constructs code-switched training data by replacing source phrases with their target translations. Our method changes neither the NMT model nor the decoding algorithm, allowing the model to learn lexicon translations by copying source-side target words. Extensive experiments show that our method achieves consistent improvements over existing approaches, improving the translation of constrained words without hurting unconstrained words.
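
The abstract describes the augmentation at a high level: source phrases covered by a user-provided lexicon are replaced in the source sentence with their target-side translations, yielding code-switched training (or test) inputs. Below is a minimal sketch of that replacement step, assuming a simple phrase-to-phrase lexicon; the function name `make_code_switched`, the greedy longest-match strategy, and the toy example are illustrative assumptions, not the paper's released implementation.

```python
from typing import Dict, List


def make_code_switched(source_tokens: List[str],
                       lexicon: Dict[str, str]) -> List[str]:
    """Replace source phrases found in `lexicon` with their pre-specified
    target translations, producing a code-switched source sentence.
    Hypothetical helper; greedy longest-match is one possible strategy."""
    out: List[str] = []
    i = 0
    # Longest source phrase in the lexicon, in tokens.
    max_len = max((len(k.split()) for k in lexicon), default=0)
    while i < len(source_tokens):
        replaced = False
        # Try the longest matching source phrase first.
        for n in range(min(max_len, len(source_tokens) - i), 0, -1):
            phrase = " ".join(source_tokens[i:i + n])
            if phrase in lexicon:
                out.extend(lexicon[phrase].split())
                i += n
                replaced = True
                break
        if not replaced:
            out.append(source_tokens[i])
            i += 1
    return out


# Toy usage: a user-specified constraint for an English-to-German pair.
src = "the new model improves translation quality".split()
lexicon = {"translation quality": "Übersetzungsqualität"}
print(make_code_switched(src, lexicon))
# ['the', 'new', 'model', 'improves', 'Übersetzungsqualität']
```

Applied to parallel training data, the same replacement teaches the model to copy target-side words appearing in the source through to the output, which is how constrained translation is achieved without modifying the model or decoder.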

Similar Work