Survey On Large Language Model-enhanced Reinforcement Learning: Concept, Taxonomy, And Methods | Awesome LLM Papers

Survey On Large Language Model-enhanced Reinforcement Learning: Concept, Taxonomy, And Methods

Yuji Cao, Huan Zhao, Yuheng Cheng, Ting Shu, Yue Chen, Guolong Liu, Gaoqi Liang, Junhua Zhao, Jinyue Yan, Yun Li. IEEE Transactions on Neural Networks and Learning Systems 2025 – 43 citations

Agentic Applications Compositional Generalization Efficiency Interdisciplinary Approaches Multimodal Semantic Representation Productivity Enhancement Reinforcement Learning Survey Paper Tools Variational Autoencoders

With extensive pre-trained knowledge and high-level general capabilities, large language models (LLMs) emerge as a promising avenue to augment reinforcement learning (RL) in aspects such as multi-task learning, sample efficiency, and high-level task planning. In this survey, we provide a comprehensive review of the existing literature on LLM-enhanced RL and summarize its characteristics compared with conventional RL methods, aiming to clarify the research scope and directions for future studies. Building on the classical agent-environment interaction paradigm, we propose a structured taxonomy that systematically categorizes the functionalities of LLMs in RL into four roles: information processor, reward designer, decision-maker, and generator. For each role, we summarize the methodologies, analyze the specific RL challenges they mitigate, and provide insights into future directions. Lastly, we present a comparative analysis of the roles and discuss potential applications, prospective opportunities, and open challenges of LLM-enhanced RL. With this taxonomy, we aim to provide a framework for researchers to effectively leverage LLMs in RL, potentially accelerating RL applications in complex domains such as robotics, autonomous driving, and energy systems.
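To make the taxonomy concrete, the "reward designer" role can be sketched as an LLM that scores transitions against a natural-language task description, supplementing a sparse environment reward. This is an illustrative toy only: `llm_score`, `shaped_reward`, and the heuristic standing in for a real LLM call are all hypothetical names invented here, not the survey's own code.

```python
# Hypothetical sketch of the "reward designer" role: an LLM judges how well an
# observation matches a natural-language task description, and its score is
# mixed into the environment's (often sparse) reward as a shaping signal.

def llm_score(task_description: str, observation: str) -> float:
    """Stand-in for a real LLM call. Here, a toy heuristic: reward 1.0 if the
    observation mentions the goal phrase extracted from the task description."""
    goal = task_description.split("reach the ")[-1].rstrip(".")
    return 1.0 if goal in observation else 0.0

def shaped_reward(env_reward: float, task_description: str,
                  observation: str, weight: float = 0.5) -> float:
    """Combine the environment reward with the LLM-derived shaping term."""
    return env_reward + weight * llm_score(task_description, observation)

task = "Navigate the maze and reach the exit."
# Dense feedback even where the environment reward is zero:
r = shaped_reward(0.0, task, "agent is adjacent to the exit")  # 0.5
```

In actual LLM-enhanced RL systems, `llm_score` would issue a prompt to a language model rather than run a string match; the point of the sketch is only the interface, i.e. where the LLM slots into the agent-environment loop.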

Similar Work