
Enabling Scalable Oversight Via Self-evolving Critic

Zhengyang Tang, Ziniu Li, Zhenyang Xiao, Tian Ding, Ruoyu Sun, Benyou Wang, Dayiheng Liu, Fei Huang, Tianyu Liu, Bowen Yu, Junyang Lin. No Venue 2025

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Evaluation Tools Training Techniques

Despite their remarkable performance, the development of Large Language Models (LLMs) faces a critical challenge in scalable oversight: providing effective feedback for tasks where human evaluation is difficult or where LLMs outperform humans. While there is growing interest in using LLMs for critique, current approaches still rely on human annotations or more powerful models, leaving open the question of how to enhance critique capabilities without external supervision. We introduce SCRIT (Self-evolving CRITic), a framework that enables genuine self-evolution of critique abilities. Technically, SCRIT self-improves by training on synthetic data generated by a contrastive self-critic, which uses reference solutions to produce step-by-step critiques, and filtered by a self-validation mechanism that ensures critique quality through correction outcomes. Implemented with Qwen2.5-72B-Instruct, one of the most powerful LLMs, SCRIT achieves up to a 10.3% improvement on critique-correction and error-identification benchmarks. Our analysis reveals that SCRIT's performance scales positively with data and model size, outperforms alternative approaches, and benefits critically from its self-validation component.
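
The data-generation loop described in the abstract can be summarized as: produce a step-by-step critique of a flawed solution by contrasting it against a reference solution, then keep the critique only if following its suggested correction recovers the reference answer. Below is a minimal Python sketch of that loop, not the authors' released implementation; the `Problem` fields and the `generate_critique` / `apply_correction` callables are illustrative assumptions standing in for actual model calls.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Problem:
    question: str
    reference_solution: str   # trusted step-by-step solution
    reference_answer: str     # final answer extracted from the reference
    student_solution: str     # solution to be critiqued (possibly flawed)


@dataclass
class CritiqueExample:
    question: str
    student_solution: str
    critique: str             # kept only if it passes self-validation


def build_scrit_data(
    problems: List[Problem],
    generate_critique: Callable[[str, str, str], str],
    apply_correction: Callable[[str, str, str], str],
) -> List[CritiqueExample]:
    """Contrastive self-critique plus self-validation filtering (sketch).

    generate_critique(question, reference_solution, student_solution)
        -> critique text the model produces while contrasting the student
           solution against the reference solution, step by step.
    apply_correction(question, student_solution, critique)
        -> final answer the model reaches after applying the critique.

    Only critiques whose corrections recover the reference answer are
    retained as synthetic training data.
    """
    kept: List[CritiqueExample] = []
    for p in problems:
        # 1) Contrastive critique: point out where the student solution errs,
        #    using the reference solution as the contrast.
        critique = generate_critique(
            p.question, p.reference_solution, p.student_solution
        )

        # 2) Self-validation: re-solve by applying the critique and keep the
        #    critique only if the corrected answer matches the reference.
        corrected_answer = apply_correction(
            p.question, p.student_solution, critique
        )
        if corrected_answer.strip() == p.reference_answer.strip():
            kept.append(
                CritiqueExample(p.question, p.student_solution, critique)
            )
    return kept
```

The validated examples would then be used to fine-tune the same model on the critique task, closing the self-evolution loop without human annotations or a stronger external critic.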

https://huggingface.co/discussions/paper/67847bcc43f26c67cc29d6cd

Similar Work