Very Deep Transformers For Neural Machine Translation | Awesome LLM Papers

Very Deep Transformers For Neural Machine Translation

Xiaodong Liu, Kevin Duh, Liyuan Liu, Jianfeng Gao. arXiv 2020 – 73 citations

[Code] [Paper]
Evaluation, Has Code, Interdisciplinary Approaches, Model Architecture, Neural Machine Translation, Training Techniques

We explore the application of very deep Transformer models for Neural Machine Translation (NMT). Using a simple yet effective initialization technique that stabilizes training, we show that it is feasible to build standard Transformer-based models with up to 60 encoder layers and 12 decoder layers. These deep models outperform their baseline 6-layer counterparts by as much as 2.5 BLEU, and achieve new state-of-the-art benchmark results on WMT14 English-French (43.8 BLEU and 46.4 BLEU with back-translation) and WMT14 English-German (30.1 BLEU). The code and trained models will be publicly available at: https://github.com/namisan/exdeep-nmt.
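To make the scale described in the abstract concrete, here is a minimal sketch, assuming PyTorch's `nn.Transformer` as a generic stand-in for the paper's NMT architecture, of a model with 60 encoder layers and 12 decoder layers. All hyperparameters other than the layer counts are illustrative placeholders, and the paper's stabilizing initialization technique (released in the linked repository) is not reproduced here.

```python
import torch
import torch.nn as nn

# Illustrative configuration only: 60 encoder layers and 12 decoder layers,
# as in the abstract. Width, heads, and dropout are hypothetical defaults,
# not values taken from the paper.
model = nn.Transformer(
    d_model=512,            # hypothetical model width
    nhead=8,                # hypothetical number of attention heads
    num_encoder_layers=60,  # "up to 60 encoder layers"
    num_decoder_layers=12,  # "and 12 decoder layers"
    dim_feedforward=2048,
    dropout=0.1,
)

# Dummy forward pass with (sequence_length, batch_size, d_model) tensors.
src = torch.randn(10, 2, 512)
tgt = torch.randn(7, 2, 512)
out = model(src, tgt)
print(out.shape)  # torch.Size([7, 2, 512])
```

Note that simply stacking this many layers with the default initialization is what the paper identifies as unstable to train; the contribution is the initialization that makes such depths feasible.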

Similar Work