
Thinking-Free Policy Initialization Makes Distilled Reasoning Models More Effective and Efficient Reasoners

Xin Xu, Cliveb Ai, Kai Yang, Tianhao Chen, Yang Wang, Saiyong Yang, Can Yang. No Venue, 2025

Compositional Generalization · Efficiency · Memory & Context · Prompting · Reinforcement Learning · Training Techniques

Reinforcement Learning with Verifiable Reward (RLVR) effectively solves complex tasks but demands extremely long context lengths during training, leading to substantial computational costs. While multi-stage training can partially mitigate this, starting with overly short contexts often causes irreversible performance degradation, ultimately failing to reduce overall training compute significantly. In this paper, we introduce Thinking-Free Policy Initialization (TFPI), a simple yet effective adaptation to RLVR that bridges long Chain-of-Thought (CoT) distillation and standard RLVR. TFPI employs a simple ThinkFree operation, explicitly discarding the thinking content via a direct </think> append, to reduce token usage during inference. Training with ThinkFree-adapted inputs improves performance and lowers token consumption, even in the original slow-thinking mode. Extensive experiments across various benchmarks show that TFPI accelerates RL convergence, achieves a higher performance ceiling, and yields more token-efficient reasoning models without specialized rewards or complex training designs. With TFPI only, we train a 4B model to reach 89.0% accuracy on AIME24 and 65.5% on LiveCodeBench using less than 4K H20 hours.
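The ThinkFree operation described in the abstract is purely an input adaptation: the assistant turn is opened with an already-closed (empty) thinking block so the model decodes the answer directly instead of producing a long CoT first. Below is a minimal sketch of that idea, assuming a DeepSeek-R1-style chat template; the function name and the `<|User|>` / `<|Assistant|>` tag strings are illustrative assumptions, not taken from the paper's code.

```python
# Sketch of the ThinkFree input adaptation: append </think> so the
# distilled reasoning model skips the slow-thinking phase.
# Template tokens below are assumed (R1-distill style), not from the paper.

def think_free(question: str) -> str:
    """Build a ThinkFree-adapted prompt.

    The assistant turn normally begins with a <think> block; closing it
    immediately signals that the thinking content is empty, so decoding
    proceeds straight to the final answer.
    """
    return (
        "<|User|>" + question
        + "<|Assistant|><think>\n\n</think>\n\n"  # empty thinking block
    )


if __name__ == "__main__":
    prompt = think_free("What is 17 * 24?")
    print(prompt)  # feed this string to the model for thinking-free decoding
```

Under this reading, TFPI simply trains with such ThinkFree-adapted inputs as a policy-initialization stage before standard RLVR, which is why it needs no specialized rewards or extra training machinery.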
