Saliency-driven Word Alignment Interpretation For Neural Machine Translation

Shuoyang Ding, Hainan Xu, Philipp Koehn. Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), 2019 – 60 citations

[Paper]
Tags: WMT

Despite their original goal of jointly learning to align and translate, Neural Machine Translation (NMT) models, especially the Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which can only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, can be applied either offline or online, and require no parameter updates or architectural changes. We show that under the forced decoding setup, the alignments induced by our interpretation methods are of better quality than fast-align for some systems, and that when performing free decoding, they agree well with the alignments induced by automatic alignment tools.
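The general recipe the abstract describes can be sketched as follows: under forced decoding, take the gradient of each target token's log-probability with respect to the source word embeddings, and read off the most salient source position as that token's alignment. The sketch below illustrates this gradient-saliency idea in PyTorch, not the authors' exact implementation; `model.embed_src`, the `model(src_emb, tgt_ids)` call signature, and the unbatched tensor shapes are hypothetical assumptions for illustration.

```python
import torch

def saliency_alignment(model, src_ids, tgt_ids):
    """Induce word alignments from gradient saliency (a sketch of the
    general idea; hypothetical model interface, not the paper's code)."""
    # Embed the source and track gradients w.r.t. the embeddings.
    src_emb = model.embed_src(src_ids)        # (S, d), hypothetical accessor
    src_emb.requires_grad_(True)

    # Forced decoding: feed the reference target and score every token.
    logits = model(src_emb, tgt_ids)          # (T, vocab), assumed signature
    log_probs = torch.log_softmax(logits, dim=-1)

    alignment = []
    for t, y in enumerate(tgt_ids.tolist()):
        model.zero_grad()
        src_emb.grad = None
        # Gradient of target token t's log-probability w.r.t. the source
        # embeddings yields one saliency vector per source word.
        log_probs[t, y].backward(retain_graph=True)
        saliency = src_emb.grad.norm(dim=-1)   # (S,) saliency per source word
        # Take the most salient source position as the alignment for token t.
        alignment.append(int(saliency.argmax()))
    return alignment
```

Because the method only reads gradients from a forward-backward pass, it is model-agnostic in the sense the abstract claims: nothing about the encoder-decoder's parameters or architecture changes, and the same loop applies offline to a trained system or online during decoding.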

Similar Work