OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework

Jian Hu, Xibin Wu, Weixun Wang, Xianyu, Dehao Zhang, Yu Cao. No Venue, 2024

[Code] [Paper]
Compositional Generalization, Efficiency, Fine Tuning, Has Code, Interdisciplinary Approaches, Multimodal Semantic Representation, Productivity Enhancement, Prompting, Reinforcement Learning, Tools, Training Techniques

Large Language Models (LLMs) fine-tuned via Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) significantly improve the alignment of AI behavior with human values and further raise the upper bound of AI capabilities, particularly in reasoning-intensive, long chain-of-thought (long-CoT) tasks. However, existing RLHF (or RLVR) frameworks commonly face challenges such as inference bottlenecks and complexity barriers, restricting their accessibility for newcomers. To bridge this gap, we introduce OpenRLHF, a user-friendly, scalable, and easy-to-learn open-source RLHF framework built upon Ray, vLLM, DeepSpeed, and HuggingFace Transformers, featuring a simplified design, clear code structure, and comprehensive documentation to facilitate entry for researchers and practitioners. Experimental results show that OpenRLHF achieves superior training efficiency, with speedups ranging from 1.22x to 1.68x across different model sizes compared to state-of-the-art frameworks, while requiring significantly fewer lines of code for implementation. OpenRLHF is publicly available at https://github.com/OpenRLHF/OpenRLHF, and has already been adopted by leading institutions to accelerate RLHF research and learning.
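As a rough illustration of the design the abstract describes, where Ray orchestrates distributed workers and vLLM serves rollout generation separately from training, the sketch below shows a minimal, hypothetical PPO-style data-collection step. This is not OpenRLHF's actual API: the `RolloutWorker` class, the `score_responses` helper, and the model name are all placeholders, and a real pipeline would replace the toy reward with a reward model (or a verifiable-reward checker for RLVR) and feed the collected tuples into a DeepSpeed-sharded actor/critic update.

```python
# Minimal, hypothetical sketch of the Ray + vLLM rollout pattern the
# abstract describes. Not OpenRLHF's API: RolloutWorker and
# score_responses are illustrative placeholders.
import ray
from vllm import LLM, SamplingParams


@ray.remote(num_gpus=1)
class RolloutWorker:
    """Holds a vLLM engine so generation runs off the training process."""

    def __init__(self, model_name: str):
        self.engine = LLM(model=model_name)

    def generate(self, prompts, max_tokens=512):
        params = SamplingParams(temperature=1.0, max_tokens=max_tokens)
        outputs = self.engine.generate(prompts, params)
        # One sampled continuation per prompt.
        return [out.outputs[0].text for out in outputs]


def score_responses(prompts, responses):
    # Placeholder reward: a real setup would query a reward model here.
    return [min(len(r) / 1000.0, 1.0) for r in responses]


if __name__ == "__main__":
    ray.init()
    # Model name is a placeholder; substitute any HF causal LM checkpoint.
    worker = RolloutWorker.remote("meta-llama/Llama-3.1-8B-Instruct")
    prompts = ["Explain PPO in one paragraph.", "What is RLHF?"]
    responses = ray.get(worker.generate.remote(prompts))
    rewards = score_responses(prompts, responses)
    # A PPO trainer (actor + critic, e.g. DeepSpeed-sharded) would now
    # consume (prompt, response, reward) tuples to update the policy.
    for p, r, rw in zip(prompts, responses, rewards):
        print(f"reward={rw:.3f} | {p!r} -> {r[:60]!r}")
```

Decoupling generation from training in this way is one plausible reading of how the framework sidesteps the inference bottleneck the abstract mentions: rollout throughput scales with the number of vLLM engines rather than being tied to the training processes.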

Similar Work