
Coverage Embedding Models For Neural Machine Translation

Haitao Mi, Baskaran Sankaran, Zhiguo Wang, Abe Ittycheriah. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016 – 137 citations


In this paper, we enhance attention-based neural machine translation (NMT) with explicit coverage embedding models to alleviate the problems of repeated and dropped translations. For each source word, our model starts from a full coverage embedding vector that tracks the word's coverage status, and updates it with neural networks at each decoding step as the translation proceeds. Experiments on the large-scale Chinese-to-English task show that our enhanced model significantly improves translation quality on various test sets over a strong large-vocabulary NMT system.
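
The sketch below illustrates the general idea of per-source-word coverage embeddings that are initialized to a learned "full coverage" vector and refreshed at every decoding step. It is a minimal illustration, not the authors' implementation: the GRU-style update conditioned on the attention weight and decoder state, and all dimensions and names, are assumptions for demonstration.

```python
import torch
import torch.nn as nn


class CoverageEmbedding(nn.Module):
    """Illustrative coverage embeddings for attention-based NMT.

    Each source position keeps its own coverage vector, initialized to a shared
    learned "full coverage" embedding and updated once per decoding step.
    The GRU-based update rule here is a hypothetical stand-in, not the paper's
    exact formulation.
    """

    def __init__(self, cov_dim: int, dec_dim: int):
        super().__init__()
        # Shared starting point: every source word begins fully "uncovered".
        self.init_coverage = nn.Parameter(torch.randn(cov_dim))
        # Update cell: input is [attention weight; decoder state] per source word.
        self.update = nn.GRUCell(input_size=dec_dim + 1, hidden_size=cov_dim)

    def initial_state(self, batch: int, src_len: int) -> torch.Tensor:
        # (batch, src_len, cov_dim): one coverage vector per source position.
        return self.init_coverage.expand(batch, src_len, -1).contiguous()

    def step(self, coverage: torch.Tensor, attn: torch.Tensor,
             dec_state: torch.Tensor) -> torch.Tensor:
        """Refresh coverage after one decoding step.

        coverage:  (batch, src_len, cov_dim) current coverage embeddings
        attn:      (batch, src_len)          attention weights at this step
        dec_state: (batch, dec_dim)          decoder hidden state at this step
        """
        batch, src_len, cov_dim = coverage.shape
        dec = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        inp = torch.cat([attn.unsqueeze(-1), dec], dim=-1)
        new_cov = self.update(inp.reshape(batch * src_len, -1),
                              coverage.reshape(batch * src_len, cov_dim))
        return new_cov.view(batch, src_len, cov_dim)


# Toy usage: 2 sentences, 5 source words, 3 decoding steps with random attention.
cov_model = CoverageEmbedding(cov_dim=8, dec_dim=16)
coverage = cov_model.initial_state(batch=2, src_len=5)
for _ in range(3):
    attn = torch.softmax(torch.randn(2, 5), dim=-1)
    dec_state = torch.randn(2, 16)
    coverage = cov_model.step(coverage, attn, dec_state)
```

In a full decoder, the coverage embeddings would also feed back into the attention scores, so that source words whose coverage has been "used up" receive less attention and are neither repeated nor dropped.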

Similar Work