On The Generalization Of SFT: A Reinforcement Learning Perspective With Reward Rectification

Yongliang Wu, Yizhou Zhou, Zhou Ziheng, Yingzhe Peng, Xinyu Ye, Xinting Hu, Wenbo Zhu, Lu Qi, Ming-Hsuan Yang, Xu Yang. No Venue, 2025

[Code] [Paper]
Fine Tuning · Has Code · Reinforcement Learning

We present a simple yet theoretically motivated improvement to Supervised Fine-Tuning (SFT) for large language models (LLMs), addressing its limited generalization compared to reinforcement learning (RL). Through mathematical analysis, we reveal that the standard SFT gradient implicitly encodes a problematic reward structure that can severely restrict the model's generalization. To rectify this, we propose Dynamic Fine-Tuning (DFT), which stabilizes the gradient update for each token by dynamically rescaling the objective with that token's probability. Remarkably, this single-line code change significantly outperforms standard SFT across multiple challenging benchmarks and base models, demonstrating greatly improved generalization. Our approach also shows competitive results in offline RL settings, offering an effective yet simpler alternative. This work bridges theoretical insight and practical solutions, substantially advancing SFT performance. The code will be available at https://github.com/yongliang-wu/DFT.
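The "single-line code change" described above amounts to weighting each token's cross-entropy term by the model's own (detached) probability for that target token. Below is a minimal PyTorch sketch of that idea, assuming standard causal-LM logits/labels shapes; the function name `dft_loss` and its signature are illustrative and not taken from the linked repository, which should be consulted for the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def dft_loss(logits, labels, ignore_index=-100):
    """Illustrative sketch of the Dynamic Fine-Tuning (DFT) idea:
    per-token cross-entropy rescaled by the detached probability the
    model assigns to the target token. Not the official implementation.
    """
    # Shift so that positions < t predict token t (standard causal-LM setup).
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]

    log_probs = F.log_softmax(logits, dim=-1)           # [B, T-1, V]
    mask = labels != ignore_index
    safe_labels = labels.masked_fill(~mask, 0)
    tok_logp = log_probs.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)  # [B, T-1]

    # Standard SFT loss would be: -tok_logp
    # DFT rescales each term by the token probability, with gradients
    # stopped through the scaling factor (the "single-line change").
    loss = -(tok_logp.exp().detach() * tok_logp)

    return (loss * mask).sum() / mask.sum().clamp(min=1)
```

The detached scaling factor down-weights tokens the model already finds unlikely, which is how the rescaling counteracts the implicit reward structure the paper identifies in the plain SFT gradient.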

https://huggingface.co/discussions/paper/6895566648b0ae5ca2710d0f
