Evaluating The Cross-lingual Effectiveness Of Massively Multilingual Neural Machine Translation

Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman. Proceedings of the AAAI Conference on Artificial Intelligence, 2020 – 63 citations

Tags: AAAI, Fine Tuning, Model Architecture

The recently proposed massively multilingual neural machine translation (NMT) system has been shown to translate over 100 languages to and from English within a single model. Its improved translation performance on low-resource languages hints at potential cross-lingual transfer capability for downstream tasks. In this paper, we evaluate the cross-lingual effectiveness of representations from the encoder of a massively multilingual NMT model on 5 downstream classification and sequence labeling tasks covering a diverse set of over 50 languages. We compare against a strong baseline, multilingual BERT (mBERT), in different cross-lingual transfer learning scenarios and show gains in zero-shot transfer on 4 of these 5 tasks.
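The evaluation setup described above lends itself to a short illustration: train a light classification head on English representations from the multilingual encoder, then score sentences in an unseen language directly. Below is a minimal sketch of that zero-shot recipe, not the paper's actual code; `nmt_encoder` and `EMB_DIM` are hypothetical stand-ins (the paper's real encoder is the 100+ language NMT model), and the random features only mark where genuine encoder outputs would go.

```python
# Minimal sketch of zero-shot cross-lingual transfer via a multilingual encoder.
# Assumption: `nmt_encoder` is a placeholder for the encoder of a massively
# multilingual NMT model; it is NOT a real library API.

import torch
import torch.nn as nn

EMB_DIM = 512  # hypothetical encoder output dimension


def nmt_encoder(sentences):
    """Placeholder: returns one pooled vector per sentence.

    In the paper's setup this would come from the multilingual NMT encoder
    (e.g. mean-pooled token states); here random features mark its place.
    """
    return torch.randn(len(sentences), EMB_DIM)


# A simple classification head on top of the shared representations.
classifier = nn.Linear(EMB_DIM, 3)  # e.g. 3-way NLI labels
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Zero-shot transfer: train the head on English data only ...
en_sentences = ["A man is playing guitar.", "The cat sleeps."]
en_labels = torch.tensor([0, 1])
for _ in range(10):
    logits = classifier(nmt_encoder(en_sentences))
    loss = loss_fn(logits, en_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# ... then evaluate directly on another language with no labeled data in it.
de_sentences = ["Ein Mann spielt Gitarre.", "Die Katze schläft."]
with torch.no_grad():
    preds = classifier(nmt_encoder(de_sentences)).argmax(dim=-1)
print(preds)
```

Because the encoder's representation space is shared across its 100+ languages, a head trained only on English can, under this setup, transfer to other languages without any target-language labels.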
