
Enhancing Machine Translation With Dependency-aware Self-attention

Emanuele Bugliarello, Naoaki Okazaki. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020). 70 citations.

Tags: ACL, Model Architecture

Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism. In this work, we investigate different approaches to incorporate syntactic knowledge in the Transformer model and also propose a novel, parameter-free, dependency-aware self-attention mechanism that improves its translation quality, especially for long sentences and in low-resource scenarios. We show the efficacy of each approach on WMT English-German and English-Turkish, and WAT English-Japanese translation tasks.
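The core idea is to bias a subset of self-attention heads toward each token's syntactic parent from a dependency parse, without adding learned parameters. Below is a minimal, illustrative sketch of one plausible way to do this: standard scaled dot-product attention whose post-softmax weights are rescaled by a Gaussian centred on each token's parent position and then renormalised. The function name `parent_scaled_attention`, the value of `sigma`, and the exact placement of the scaling relative to the softmax are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def parent_scaled_attention(Q, K, V, parent_pos, sigma=1.0):
    """One dependency-aware self-attention head (illustrative sketch).

    Q, K, V:     (seq_len, d) query/key/value matrices for this head.
    parent_pos:  (seq_len,) index of each token's dependency parent
                 (the root may point to itself).
    sigma:       std. dev. of the Gaussian centred on the parent position
                 (an assumed value; the mechanism itself is parameter-free).
    """
    seq_len, d = Q.shape
    # Standard scaled dot-product attention scores.
    scores = Q @ K.T / np.sqrt(d)                      # (seq_len, seq_len)

    # Gaussian weight of every key position j around token t's parent p_t.
    positions = np.arange(seq_len)
    dist = positions[None, :] - parent_pos[:, None]    # j - p_t
    parent_weight = np.exp(-dist**2 / (2 * sigma**2))  # no learned parameters

    # Bias the attention distribution toward each token's syntactic parent.
    weights = softmax(scores, axis=-1) * parent_weight
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 5 tokens with parent indices from a (hypothetical) dependency parse.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
parents = np.array([1, 1, 1, 4, 1])   # e.g. token 1 is the root
out = parent_scaled_attention(Q, K, V, parents)
print(out.shape)  # (5, 8)
```

Because the bias is a fixed function of the parse (no extra weights), it can be applied only to some heads of some layers, leaving the rest of the Transformer unchanged.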
