Mistral 7B | Awesome LLM Papers

Mistral 7B

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. arXiv 2023 – 162 citations

Tags: Efficiency · Instruction Following · LLM for Code · Memory & Context

We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B – Instruct, that surpasses the Llama 2 13B – Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.
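The sliding window attention described above restricts each token to attending only to the previous W tokens, rather than the full prefix. Below is a minimal NumPy sketch of such a causal sliding-window mask (an illustration of the general technique, not Mistral's actual implementation; the function name and window size here are chosen for the example):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask where entry (i, j) is True iff position i may
    attend to position j: causal (j <= i) and within the last
    `window` positions (j > i - window)."""
    i = np.arange(seq_len)[:, None]  # query positions, column vector
    j = np.arange(seq_len)[None, :]  # key positions, row vector
    return (j <= i) & (j > i - window)

# Example: 6 tokens, window of 3.
mask = sliding_window_mask(6, 3)
# Token 5 can attend to tokens 3, 4, 5 but not 0, 1, 2.
```

Because information still propagates one window per layer, stacking L such layers gives an effective receptive field of roughly L × W tokens, which is how a fixed window can still cover long sequences at reduced per-token cost.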

Similar Work