Promptriever: Instruction-trained Retrievers Can Be Prompted Like Language Models | Awesome LLM Papers

Promptriever: Instruction-trained Retrievers Can Be Prompted Like Language Models

Orion Weller, Benjamin van Durme, Dawn Lawrie, Ashwin Paranjape, Yuhao Zhang, Jack Hessel. No Venue, 2024

[Paper]
Compositional Generalization, Content Enrichment, Interdisciplinary Approaches, Multimodal Semantic Representation, Prompting, Question Answering, RAG, Security, Training Techniques, Visual Contextualization

Instruction-tuned language models (LMs) can respond to imperative commands, providing a more natural user interface than their base counterparts. In this work, we present Promptriever, the first retrieval model that can be prompted like an LM. To train Promptriever, we curate and release a new instance-level instruction training set from MS MARCO, spanning nearly 500k instances. Promptriever not only achieves strong performance on standard retrieval tasks, but also follows instructions. We observe: (1) large gains (reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR / +3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR), and (3) the ability to perform hyperparameter search via prompting to reliably improve retrieval performance (+1.4 average increase on BEIR). Promptriever demonstrates that retrieval models can be controlled with prompts on a per-query basis, setting the stage for future work aligning LM prompting techniques with information retrieval.
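The core mechanism the abstract describes — attaching a per-query natural-language instruction to the query before dense retrieval — can be sketched as follows. This is a minimal illustration, not the paper's actual model: the `embed` function here is a toy deterministic bag-of-words encoder standing in for Promptriever's trained bi-encoder, and all function names are hypothetical.

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    """Toy deterministic hashed bag-of-words embedding.
    Stand-in for a real dense encoder (hypothetical, for illustration only)."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        # Deterministic token hash so results are reproducible across runs.
        bucket = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, docs, instruction=None):
    """Rank docs by cosine similarity to the query, optionally appending a
    per-query instruction -- the prompting interface the paper proposes."""
    q_text = query if instruction is None else f"{query} {instruction}"
    q_emb = embed(q_text)
    scores = [(float(q_emb @ embed(d)), d) for d in docs]
    return sorted(scores, reverse=True)

docs = ["apple pie recipe", "stock market news"]
# The same query can be steered with an instruction, changing the ranking input.
plain = retrieve("apple pie recipe", docs)
steered = retrieve("apple pie recipe", docs,
                   instruction="relevant documents must mention baking")
```

In a real instruction-trained retriever the instruction changes relevance behavior semantically rather than via token overlap; the sketch only shows the interface shape (query + instruction in, ranked documents out).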

Similar Work