
An Operation Sequence Model For Explainable Neural Machine Translation

Felix Stahlberg, Danielle Saunders, Bill Byrne. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018 – 51 citations

Tags: EMNLP, Interdisciplinary Approaches, Interpretability, Model Architecture, Neural Machine Translation, Variational Autoencoders

We propose to achieve explainable neural machine translation (NMT) by changing the output representation so that it explains itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations that set markers in the target sentence and move a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information, which is often crucial in practical machine translation and improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on Japanese-English and Portuguese-English, and is within 0.5 BLEU of it on Spanish-English.
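To make the mechanism concrete, below is a minimal, illustrative interpreter for such an operation sequence. It is a sketch under simplifying assumptions, not the authors' implementation: the operation names SET_MARKER, JMP_FWD and JMP_BWD follow the paper, while the hypothetical SRC_POP operation stands in for the source tokens that the paper interleaves into the sequence to advance the read head. Executing the sequence yields both the target sentence and the word alignment links that make the output representation self-explaining.

```python
# Illustrative sketch only: a simplified interpreter for an operation
# sequence of the kind described in the paper. SRC_POP is a stand-in
# for the interleaved source tokens; the compartment semantics are a
# simplification of the paper's marker/jump mechanics.

SET_MARKER, JMP_FWD, JMP_BWD, SRC_POP = "SET_MARKER", "JMP_FWD", "JMP_BWD", "SRC_POP"

def execute(ops, source):
    """Turn an operation sequence into (target tokens, alignment links).

    Alignment links are (source_index, target_token) pairs recorded at
    the moment a target token is written.
    """
    compartments = [[]]   # target sentence as a list of compartments
    head = 0              # index of the compartment the write head is in
    src_pos = 0           # read head position in the source sentence
    alignments = []

    for op in ops:
        if op == SRC_POP:              # advance monotonically through the source
            src_pos += 1
        elif op == SET_MARKER:         # open a new compartment after the current one
            compartments.insert(head + 1, [])
        elif op == JMP_FWD:            # move the write head to the next compartment
            head = min(head + 1, len(compartments) - 1)
        elif op == JMP_BWD:            # move the write head to the previous compartment
            head = max(head - 1, 0)
        else:                          # an ordinary target token: write and align
            compartments[head].append(op)
            alignments.append((min(src_pos, len(source) - 1), op))

    target = [tok for comp in compartments for tok in comp]
    return target, alignments


# Toy usage: reorder "la casa roja" into "the red house" via a marker.
ops = ["the", SRC_POP, SET_MARKER, JMP_FWD, "house", SRC_POP, JMP_BWD, "red"]
print(execute(ops, ["la", "casa", "roja"]))
# -> (['the', 'red', 'house'], [(0, 'the'), (1, 'house'), (2, 'red')])
```

In the toy run, "house" is written into a new compartment while the read head is on "casa", and "red" is written back into the first compartment while the read head is on "roja", so the reordering and the word alignments fall out of the operation sequence itself.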

Similar Work