ReasonMed: A 370K Multi-agent Generated Dataset For Advancing Medical Reasoning

Yu Sun, Xingyu Qian, Weiwen Xu, Hao Zhang, Chenghao Xiao, Long Li, Yu Rong, Wenbing Huang, Qifeng Bai, Tingyang Xu. No Venue 2025

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Agentic Datasets Evaluation Fine Tuning Prompting Training Techniques

Though reasoning-based large language models (LLMs) have excelled in mathematics and programming, their capabilities in knowledge-intensive medical question answering remain underexplored. To address this, we introduce ReasonMed, the largest medical reasoning dataset, comprising 370k high-quality examples distilled from 1.7 million initial reasoning paths generated by various LLMs. ReasonMed is constructed through a multi-agent verification and refinement process, where we design an Error Refiner to enhance the reasoning paths by identifying and correcting error-prone steps flagged by a verifier. Leveraging ReasonMed, we systematically investigate best practices for training medical reasoning models and find that combining detailed Chain-of-Thought (CoT) reasoning with concise answer summaries yields the most effective fine-tuning strategy. Based on this strategy, we train ReasonMed-7B, which sets a new benchmark for sub-10B models, outperforming the prior best by 4.17% and even exceeding LLaMA3.1-70B on PubMedQA by 4.60%.
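The abstract describes a verify-and-refine pipeline: a verifier flags error-prone steps in candidate reasoning paths, and an Error Refiner rewrites the flagged steps before a path is admitted to the dataset. Below is a minimal sketch of that loop under stated assumptions; it is not the authors' code, and all names (`ReasoningPath`, `verify_step`, `refine_step`, the string-length check standing in for verifier/refiner LLM calls) are hypothetical placeholders.

```python
# Illustrative sketch of a multi-agent verify-and-refine loop (assumed structure,
# not ReasonMed's actual implementation). A verifier flags error-prone steps and
# an Error Refiner rewrites them; paths that never pass would be discarded.

from dataclasses import dataclass
from typing import List


@dataclass
class ReasoningPath:
    question: str
    steps: List[str]
    answer: str


def verify_step(question: str, step: str) -> bool:
    """Placeholder verifier: in the paper an LLM judges each step.
    Here we merely flag trivially short steps as error-prone."""
    return len(step.strip()) > 10


def refine_step(question: str, step: str) -> str:
    """Placeholder Error Refiner: in the paper an LLM rewrites the flagged step.
    Here we only annotate it to mark where refinement would occur."""
    return step.strip() + " [refined]"


def verify_and_refine(path: ReasoningPath, max_rounds: int = 2) -> ReasoningPath:
    """Check every step; rewrite flagged ones until all pass or the round budget ends."""
    for _ in range(max_rounds):
        flagged = [i for i, s in enumerate(path.steps)
                   if not verify_step(path.question, s)]
        if not flagged:
            break
        for i in flagged:
            path.steps[i] = refine_step(path.question, path.steps[i])
    return path


if __name__ == "__main__":
    example = ReasoningPath(
        question="Which vitamin deficiency causes scurvy?",
        steps=["Scurvy results from impaired collagen synthesis.", "Vit C"],
        answer="Vitamin C",
    )
    print(verify_and_refine(example).steps)
```

In the reported fine-tuning recipe, each retained path would then be serialized as a detailed Chain-of-Thought followed by a concise answer summary, the combination the authors find most effective for training ReasonMed-7B.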

https://huggingface.co/discussions/paper/684b8dbe3b733ba333687025

Similar Work