
Correcting Length Bias In Neural Machine Translation

Kenton Murray, David Chiang. Proceedings of the Third Conference on Machine Translation: Research Papers, 2018 – 141 citations


We study two problems in neural machine translation (NMT). First, in beam search, whereas a wider beam should in principle help translation, it often hurts NMT. Second, NMT has a tendency to produce translations that are too short. Here, we argue that these problems are closely related and both rooted in label bias. We show that correcting the brevity problem almost eliminates the beam problem; we compare some commonly-used methods for doing this, finding that a simple per-word reward works well; and we introduce a simple and quick way to tune this reward using the perceptron algorithm.
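The per-word reward described in the abstract can be sketched as follows. The candidate hypotheses, their probabilities, and the `perceptron_update` rule below are illustrative assumptions, not the paper's actual data or exact update; the scoring rule is score(y) = log p(y|x) + γ·|y|, where a positive γ offsets the model's bias toward short outputs.

```python
import math

# Hypothetical candidates with made-up model log-probabilities.
# NMT models tend to assign higher sentence probability to shorter
# outputs, so the raw model score favors the truncated hypothesis.
candidates = [
    (["the", "cat", "sat"], math.log(0.020)),                     # too short
    (["the", "cat", "sat", "on", "the", "mat"], math.log(0.012)), # full length
]

def rescore(hyp, logprob, gamma):
    """Per-word reward: score(y) = log p(y|x) + gamma * |y|."""
    return logprob + gamma * len(hyp)

def best(gamma):
    """Return the highest-scoring hypothesis under reward gamma."""
    return max(candidates, key=lambda c: rescore(c[0], c[1], gamma))[0]

def perceptron_update(gamma, ref_len, hyp_len, lr=0.1):
    """Perceptron-style tuning sketch (assumed form, not the paper's
    exact rule): nudge gamma so decoded length tracks reference length."""
    return gamma + lr * (ref_len - hyp_len)

print(best(0.0))  # raw model score picks the short hypothesis
print(best(0.5))  # with a per-word reward, the longer hypothesis wins
```

With γ = 0 the short hypothesis wins on raw log-probability; with γ = 0.5 the per-word reward flips the ranking toward the full-length translation, illustrating how correcting brevity also changes which beam candidate is selected.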

Similar Work