MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning

Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, Ping Luo, Yu Qiao, Qiaosheng Zhang, Wenqi Shao. No Venue 2025

[Code] [Paper]   Search on Google Scholar   Search on Semantic Scholar
Efficiency · Fine Tuning · Has Code · Reinforcement Learning

We present MM-Eureka, a multimodal reasoning model that successfully extends large-scale rule-based reinforcement learning (RL) to multimodal reasoning. While rule-based RL has shown remarkable success in improving LLMs' reasoning abilities in text domains, its application to multimodal settings has remained challenging. Our work reproduces key characteristics of text-based RL systems like DeepSeek-R1 in the multimodal space, including steady increases in accuracy reward and response length, and the emergence of reflection behaviors. We demonstrate that both instruction-tuned and pre-trained models can develop strong multimodal reasoning capabilities through rule-based RL without supervised fine-tuning, showing superior data efficiency compared to alternative approaches. We open-source our complete pipeline to foster further research in this area. We release all our code, models, and data at https://github.com/ModalMinds/MM-EUREKA

https://huggingface.co/discussions/paper/67cf9cd137bc7273882147e2
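The "rule-based" reward in the abstract refers to programmatic reward functions (e.g., exact-match accuracy on the final answer plus a format check), in the style of DeepSeek-R1, rather than a learned reward model. Below is a minimal sketch of such a reward; the \boxed{} answer convention, the <think> tags, and the 0.5 format weight are illustrative assumptions, not details confirmed from the MM-Eureka paper.

```python
import re

# Illustrative DeepSeek-R1-style rule-based reward (not MM-Eureka's actual API).
# Assumes the model wraps reasoning in <think>...</think> and the final
# answer in \boxed{...}; both conventions are assumptions for this sketch.
THINK_PATTERN = re.compile(r"<think>.*?</think>", re.DOTALL)
BOXED_PATTERN = re.compile(r"\\boxed\{([^{}]*)\}")

def format_reward(response: str) -> float:
    """1.0 if the response contains a <think>...</think> block, else 0.0."""
    return 1.0 if THINK_PATTERN.search(response) else 0.0

def accuracy_reward(response: str, ground_truth: str) -> float:
    """1.0 if the last \\boxed{} answer string-matches the label, else 0.0."""
    answers = BOXED_PATTERN.findall(response)
    if not answers:
        return 0.0
    return 1.0 if answers[-1].strip() == ground_truth.strip() else 0.0

def rule_based_reward(response: str, ground_truth: str) -> float:
    """Combined scalar reward used as the RL training signal.
    The 0.5 weight on format is an assumed value for illustration."""
    return accuracy_reward(response, ground_truth) + 0.5 * format_reward(response)

if __name__ == "__main__":
    sample = "<think>The angles of a triangle sum to 180 degrees.</think> \\boxed{45}"
    print(rule_based_reward(sample, "45"))  # -> 1.5
```

In training, a scalar like this would score each sampled response and drive a policy-gradient update; the key property is that the reward is verifiable by rule, so no separate reward model needs to be trained.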

Similar Work