
Learning Cross-Lingual Sentence Representations via a Multi-Task Dual-Encoder Model

Muthuraman Chidambaram, Yinfei Yang, Daniel Cer, Steve Yuan, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil. Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), 2019 – 116 citations

Tags: Compositional Generalization · Evaluation · Few Shot · Training Techniques

A significant roadblock in multilingual neural language modeling is the lack of labeled non-English data. One potential method for overcoming this issue is learning cross-lingual text representations that can be used to transfer performance from training on English tasks to non-English tasks, despite there being little to no task-specific non-English data. In this paper, we explore a natural setup for learning cross-lingual sentence representations: the dual-encoder. We provide a comprehensive evaluation of our cross-lingual representations on a number of monolingual, cross-lingual, and zero-shot/few-shot learning tasks, and also give an analysis of different learned cross-lingual embedding spaces.
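To make the dual-encoder setup concrete, below is a minimal sketch of the general idea: a single shared encoder embeds both sentences of a pair, and a dot-product ranking loss with in-batch negatives pulls matched pairs (e.g. a sentence and its translation) together. This is an illustrative simplification, not the paper's actual implementation; the encoder architecture, tasks, and training details here are assumptions.

```python
# Hedged sketch of a dual-encoder with in-batch negatives (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AveragingEncoder(nn.Module):
    """Embeds a padded batch of token ids by averaging word embeddings
    and projecting the result; a stand-in for the paper's encoder."""
    def __init__(self, vocab_size: int, dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        mask = (token_ids != 0).float().unsqueeze(-1)        # [B, T, 1]
        summed = (self.embed(token_ids) * mask).sum(dim=1)   # [B, D]
        counts = mask.sum(dim=1).clamp(min=1.0)              # [B, 1]
        return self.proj(summed / counts)                    # [B, D]

def ranking_loss(source_vecs: torch.Tensor, target_vecs: torch.Tensor) -> torch.Tensor:
    """Dot-product ranking loss: each source sentence must score its own
    target higher than every other target in the batch (in-batch negatives)."""
    scores = source_vecs @ target_vecs.t()                    # [B, B]
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy usage: English sentences and their (hypothetical) translations share one
# encoder, so translation pairs are pushed toward nearby points in the space.
encoder = AveragingEncoder(vocab_size=30000)
en_ids = torch.randint(1, 30000, (8, 16))   # fake English token ids
xx_ids = torch.randint(1, 30000, (8, 16))   # fake non-English token ids
loss = ranking_loss(encoder(en_ids), encoder(xx_ids))
loss.backward()
```

In a multi-task version of this setup, the same shared encoder would be trained jointly on several such ranking tasks (e.g. translation ranking plus monolingual tasks), so that the resulting sentence embeddings are useful both within and across languages.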

Similar Work