Quality-aware Decoding For Neural Machine Translation

Patrick Fernandes, António Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, André F. T. Martins. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022 – 40 citations

ACL · Datasets · Evaluation · Has Code · Interdisciplinary Approaches · NAACL · Neural Machine Translation

Despite the progress in machine translation quality estimation and evaluation in recent years, decoding in neural machine translation (NMT) is mostly oblivious to this work and centers on finding the most probable translation according to the model (MAP decoding), approximated with beam search. In this paper, we bring together these two lines of research and propose quality-aware decoding for NMT, leveraging recent breakthroughs in reference-free and reference-based MT evaluation through various inference methods such as N-best reranking and minimum Bayes risk decoding. We perform an extensive comparison of candidate generation and ranking methods across four datasets and two model classes, and find that quality-aware decoding consistently outperforms MAP-based decoding according to both state-of-the-art automatic metrics (COMET and BLEURT) and human assessments. Our code is available at https://github.com/deep-spin/qaware-decode.
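
The central inference method mentioned above is minimum Bayes risk (MBR) decoding over a pool of candidate translations. The following is a minimal, illustrative sketch of MBR selection with uniform candidate weights; the `utility` callable and the toy `overlap_f1` metric are placeholders standing in for a learned metric such as COMET or BLEURT, and are not taken from the paper's qaware-decode codebase.

```python
# Minimal sketch of minimum Bayes risk (MBR) decoding over a fixed candidate
# pool. The utility function is a placeholder: in practice it would be a
# learned MT metric such as COMET or BLEURT scoring hypothesis/reference pairs.

from typing import Callable, List


def mbr_decode(candidates: List[str], utility: Callable[[str, str], float]) -> str:
    """Return the candidate with the highest expected utility, using the
    other candidates as pseudo-references (uniform weights)."""
    best, best_score = candidates[0], float("-inf")
    for hyp in candidates:
        # Expected utility of `hyp` under the empirical distribution
        # defined by the candidate pool itself.
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = hyp, score
    return best


if __name__ == "__main__":
    # Toy utility: token-overlap F1, standing in for a neural metric.
    def overlap_f1(hyp: str, ref: str) -> float:
        h, r = set(hyp.split()), set(ref.split())
        if not h or not r:
            return 0.0
        p, rec = len(h & r) / len(h), len(h & r) / len(r)
        return 0.0 if p + rec == 0 else 2 * p * rec / (p + rec)

    pool = ["the cat sat on the mat", "a cat sat on the mat", "the cat is on a mat"]
    print(mbr_decode(pool, overlap_f1))
```

In the quality-aware setting, the candidate pool would come from sampling or beam search over the NMT model, and the same pool of metric scores can also be reused for N-best reranking by scoring each hypothesis directly with a reference-free metric.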
