Domain Adaptation Of Neural Machine Translation By Lexicon Induction

Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) – 65 citations


It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift. In this paper, we argue that this is a dual effect of the highly lexicalized nature of NMT, resulting in failure for sentences with large numbers of unknown words, and lack of supervision for domain-specific words. To remedy this problem, we propose an unsupervised adaptation method which fine-tunes a pre-trained out-of-domain NMT model using a pseudo-in-domain corpus. Specifically, we perform lexicon induction to extract an in-domain lexicon, and construct a pseudo-parallel in-domain corpus by performing word-for-word back-translation of monolingual in-domain target sentences. In five domains over twenty pairwise adaptation settings and two model architectures, our method achieves consistent improvements without using any in-domain parallel sentences, improving up to 14 BLEU over unadapted models, and up to 2 BLEU over strong back-translation baselines.
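
The two-step recipe in the abstract lends itself to a compact sketch: induce a target-to-source lexicon, then word-for-word back-translate monolingual in-domain target sentences into a pseudo-parallel corpus. The Python below is a minimal illustration only; the cosine nearest-neighbor retrieval over an already-aligned cross-lingual embedding space, the copy-through fallback for out-of-lexicon words, and all function names and toy data are assumptions for clarity, not the paper's exact implementation.

```python
# Sketch of: (1) lexicon induction via nearest-neighbor retrieval in a
# shared cross-lingual embedding space, (2) word-for-word back-translation
# of in-domain target sentences into pseudo-parallel (source, target) pairs.
# Names, toy vectors, and the copy-through fallback are illustrative
# assumptions, not the paper's exact method.

from typing import Dict, List, Tuple
import numpy as np

def induce_lexicon(
    tgt_words: List[str], tgt_vecs: np.ndarray,
    src_words: List[str], src_vecs: np.ndarray,
) -> Dict[str, str]:
    """Map each target word to its nearest source word by cosine similarity."""
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    nearest = (t @ s.T).argmax(axis=1)  # best-matching source word per target word
    return {tgt_words[i]: src_words[j] for i, j in enumerate(nearest)}

def back_translate_word_for_word(
    target_sentences: List[str], lexicon: Dict[str, str]
) -> List[Tuple[str, str]]:
    """Build (pseudo-source, target) pairs; copy through out-of-lexicon words."""
    pairs = []
    for tgt in target_sentences:
        src = " ".join(lexicon.get(tok, tok) for tok in tgt.split())
        pairs.append((src, tgt))
    return pairs

# Toy English(target) -> German(source) example with 2-d "aligned" embeddings.
tgt_words, src_words = ["doctor", "patient"], ["Arzt", "Patient"]
tgt_vecs = np.array([[1.0, 0.0], [0.0, 1.0]])
src_vecs = np.array([[0.9, 0.1], [0.1, 0.9]])
lexicon = induce_lexicon(tgt_words, tgt_vecs, src_words, src_vecs)
print(back_translate_word_for_word(["the doctor sees the patient"], lexicon))
# [('the Arzt sees the Patient', 'the doctor sees the patient')]
```

The resulting pseudo-parallel pairs would then serve as the in-domain training data for fine-tuning the pre-trained out-of-domain NMT model, which is what makes the approach unsupervised with respect to in-domain parallel text.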

Similar Work