
Multilingual E5 Text Embeddings: A Technical Report

Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei. 2024

[Code] [Paper]
Datasets · Efficiency · Evaluation · Fine Tuning · Has Code · Productivity Enhancement · Training Techniques

This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a balance between inference efficiency and embedding quality. The training procedure follows the English E5 model recipe: contrastive pre-training on 1 billion multilingual text pairs, followed by fine-tuning on a combination of labeled datasets. Additionally, we introduce a new instruction-tuned embedding model whose performance is on par with state-of-the-art English-only models of similar sizes. Information regarding the model release can be found at https://github.com/microsoft/unilm/tree/master/e5.
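
As a usage illustration (not taken from the report itself), the sketch below shows one common way to query the released checkpoints, assuming they are published on the Hugging Face Hub under names such as intfloat/multilingual-e5-small and follow the usual E5 convention of "query: " / "passage: " prefixes, mean pooling over the last hidden states, and cosine similarity between normalized embeddings; the model name, prefixes, and pooling choice here are assumptions, not claims from the report.

```python
# Minimal sketch: embedding a query and a passage with a multilingual E5 checkpoint.
# Assumptions (not from the report): the checkpoint name, the "query:"/"passage:"
# prefixes, and mean pooling + L2 normalization as the embedding recipe.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def average_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padding positions, then average over the sequence dimension.
    hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

model_name = "intfloat/multilingual-e5-small"  # assumed name; -base / -large variants analogous
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

texts = [
    "query: how are multilingual text embeddings trained?",
    "passage: E5 models are contrastively pre-trained on multilingual text pairs.",
]
batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

embeddings = average_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)   # unit-length vectors
score = embeddings[:1] @ embeddings[1:].T          # cosine similarity between query and passage
print(score)
```

For the instruction-tuned variant mentioned above, the same pipeline would apply, with a task instruction prepended to the query text; the exact instruction format should be taken from the model release page linked in the report.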

Similar Work