
Multilingual Universal Sentence Encoder For Semantic Retrieval

Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2020. 379 citations.

Tags: ACL, Fine Tuning, Model Architecture

We introduce two pre-trained, retrieval-focused multilingual sentence encoding models, based respectively on the Transformer and CNN model architectures. The models embed text from 16 languages into a single semantic space using a multi-task trained dual-encoder that learns tied representations via translation-based bridge tasks (Chidambaram et al., 2018). The models achieve performance competitive with the state of the art on semantic retrieval (SR), translation pair bitext retrieval (BR), and retrieval question answering (ReQA). On English transfer learning tasks, our sentence-level embeddings approach, and in some cases exceed, the performance of monolingual, English-only sentence embedding models. Our models are made available for download on TensorFlow Hub.
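Below is a minimal sketch of how the released models can be used for cross-lingual semantic retrieval via TensorFlow Hub. The module handle is the one published on TF Hub at the time of writing, and the example assumes `tensorflow`, `tensorflow_hub`, and `tensorflow_text` are installed; treat the snippet as an illustration, not the authors' reference code.

```python
# Sketch: cross-lingual semantic retrieval with the multilingual USE models.
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers the SentencePiece ops the model requires

# Transformer-based variant; a CNN-based variant is also published on TF Hub.
MODULE_URL = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3"
embed = hub.load(MODULE_URL)

# Candidate texts in several of the 16 supported languages share one semantic space.
corpus = [
    "The cat sat on the mat.",
    "El gato se sento en la alfombra.",   # Spanish translation of the first sentence
    "Stock markets fell sharply today.",
]
query = ["Where did the cat sit?"]

# Embeddings are approximately unit-norm, so dot product ~ cosine similarity.
corpus_emb = embed(corpus).numpy()
query_emb = embed(query).numpy()
scores = query_emb @ corpus_emb.T  # shape: (1, len(corpus))

best = int(np.argmax(scores))
print(f"Best match: {corpus[best]!r} (score {scores[0, best]:.3f})")
```

Because both questions and candidate answers map into the same space, the same dot-product scoring extends directly to the SR, BR, and ReQA setups described in the paper.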
