From Zero To Hero: On The Limitations Of Zero-shot Cross-lingual Transfer With Multilingual Transformers

Anne Lauscher, Vinit Ravishankar, Ivan Vulić, Goran Glavaš. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 – 50 citations

[Paper]
EMNLP Few Shot Fine Tuning Interdisciplinary Approaches Question Answering

Massively multilingual transformers pretrained with language modeling objectives (e.g., mBERT, XLM-R) have become the de facto default paradigm for zero-shot cross-lingual transfer in NLP, offering unmatched transfer performance. Current downstream evaluations, however, verify their efficacy predominantly in transfer settings involving languages with sufficient amounts of pretraining data, and with lexically and typologically close languages. In this work, we analyze their limitations and show that cross-lingual transfer via massively multilingual transformers, much like transfer via cross-lingual word embeddings, is substantially less effective in resource-lean scenarios and for distant languages. Our experiments, encompassing three lower-level tasks (POS tagging, dependency parsing, NER) as well as two higher-level semantic tasks (NLI, QA), empirically correlate transfer performance with linguistic similarity between the source and target languages, but also with the size of the target languages' pretraining corpora. We also demonstrate the surprising effectiveness of inexpensive few-shot transfer (i.e., fine-tuning on a few target-language instances after fine-tuning on the source language) across the board. This suggests that additional research efforts should be invested to reach beyond the limiting zero-shot conditions.
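A minimal sketch of the zero-shot-then-few-shot recipe described above, assuming a token-classification task (e.g., POS tagging) with mBERT and the HuggingFace transformers/PyTorch stack. The data-loading helpers (`load_source_data`, `load_target_few_shot`) and the label count are hypothetical placeholders, not part of the paper's released code.

```python
# Sketch: fine-tune on the source language, then continue fine-tuning on a
# handful of target-language instances (few-shot transfer).
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForTokenClassification, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; "xlm-roberta-base" for XLM-R
NUM_LABELS = 17                              # e.g., Universal POS tag set (assumed)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)


def fine_tune(model, dataloader, epochs, lr=2e-5):
    """Plain fine-tuning loop; reused for both transfer stages."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in dataloader:  # batch holds input_ids, attention_mask, labels
            outputs = model(**batch)
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model


# Stage 1 (zero-shot model): fine-tune on the full source-language training set.
source_loader = DataLoader(load_source_data(tokenizer), batch_size=32, shuffle=True)  # hypothetical helper
model = fine_tune(model, source_loader, epochs=3)

# Stage 2 (few-shot): continue fine-tuning on only k target-language instances.
k = 10
target_loader = DataLoader(load_target_few_shot(tokenizer, k), batch_size=8, shuffle=True)  # hypothetical helper
model = fine_tune(model, target_loader, epochs=10)
```

The point of the sketch is that the few-shot stage is cheap: it reuses the already source-fine-tuned model and touches only a small number of annotated target-language examples.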

Similar Work