More Thought, Less Accuracy? On The Dual Nature Of Reasoning In Vision-language Models

Xinyu Tian, Shu Zou, Zhaoyuan Yang, Mengqi He, Fabian Waschkowski, Lukas Wesemann, Peter Tu, Jing Zhang. No Venue 2025

[Code] [Paper]
Tags: Compositional Generalization, Efficiency, Has Code, Image Text Integration, Interdisciplinary Approaches, LLM For Code, Multimodal Semantic Representation, Reinforcement Learning, Visual Contextualization

Reasoning has emerged as a pivotal capability in Large Language Models (LLMs). Through Reinforcement Learning (RL), typically Group Relative Policy Optimization (GRPO), these models learn to solve complex tasks such as mathematics and code generation. Building on these advances, recent research has sought to extend reasoning to Vision-Language Models (VLMs), yielding promising results across diverse visual tasks. Despite this progress, our study uncovers the dual nature of multimodal reasoning: while it substantially enhances logical inference and facilitates performance on challenging problems, it may gradually impair perceptual grounding, leading to recognition failures on otherwise basic visual questions. Through further analysis, we attribute this phenomenon to visual forgetting, wherein prolonged reasoning causes the model to increasingly disregard visual input. To address this, we propose Vision-Anchored Policy Optimization (VAPO), a simple yet effective method that explicitly steers the reasoning process toward visually grounded trajectories. The resulting model, VAPO-Thinker-7B, substantially strengthens reliance on visual information and achieves new state-of-the-art results on a wide range of established benchmarks. Project page: https://xytian1008.github.io/VAPO/
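The abstract names GRPO as the RL backbone, whose defining step is normalizing each sampled response's reward against its group. Below is a minimal sketch of that group-relative advantage, plus a hypothetical vision-anchoring bonus: the abstract does not specify VAPO's actual objective, so `visual_grounding_scores` and the weight `lam` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: each sampled response's
    reward is standardized against the mean/std of its sampling group."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def vapo_advantages_sketch(rewards, visual_grounding_scores, lam=0.1):
    """Hypothetical vision-anchored variant (NOT the paper's formula):
    adds a bonus for trajectories that score higher on some visual
    grounding measure, nudging optimization toward visually grounded
    reasoning as the abstract describes."""
    base = grpo_advantages(rewards)
    anchor = np.asarray(visual_grounding_scores, dtype=np.float64)
    return base + lam * (anchor - anchor.mean())

# Example: 4 sampled responses to one visual question.
adv = vapo_advantages_sketch(rewards=[1.0, 0.0, 1.0, 0.0],
                             visual_grounding_scores=[0.9, 0.2, 0.4, 0.8])
print(adv)
```

Centering the grounding bonus (subtracting its group mean) keeps the added term from shifting the overall advantage scale, mirroring how GRPO itself is mean-zero within a group.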
