Multi-agent Task-oriented Dialog Policy Learning With Role-aware Reward Decomposition

Ryuichi Takanobu, Runze Liang, Minlie Huang. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020) – 46 citations

Tags: ACL · Agentic · Compositional Generalization · Dialogue & Multi-Turn · Interdisciplinary Approaches · Reinforcement Learning · Tools · User-Centric Design

Many studies have applied reinforcement learning to train a dialog policy, and the approach has shown great promise in recent years. One common strategy is to employ a user simulator to obtain a large number of simulated user experiences for reinforcement learning algorithms. However, modeling a realistic user simulator is challenging: a rule-based simulator requires heavy domain expertise for complex tasks, while a data-driven simulator requires considerable data, and it is even unclear how to evaluate a simulator. To avoid explicitly building a user simulator beforehand, we propose Multi-Agent Dialog Policy Learning, which regards both the system and the user as dialog agents. The two agents interact with each other and are learned jointly and simultaneously. The method uses the actor-critic framework to facilitate pretraining and improve scalability. We also propose a Hybrid Value Network for role-aware reward decomposition, which integrates the role-specific domain knowledge of each agent in the task-oriented dialog. Results show that our method can successfully build a system policy and a user policy simultaneously, and the two agents can achieve a high task success rate through conversational interaction.
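To make the role-aware reward decomposition more concrete, the sketch below shows a minimal Hybrid Value Network-style critic in PyTorch. It is an illustration under assumptions, not the authors' exact architecture: the layer sizes, state splits (`sys_state`, `user_state`), and the simple additive combination of a shared global value with role-specific values are all assumed for exposition.

```python
import torch
import torch.nn as nn


class HybridValueNetwork(nn.Module):
    """Sketch of a role-aware value decomposition for two dialog agents.

    One head estimates a global value shared by both agents (task-level
    reward), and two role-specific heads estimate the extra value seen
    only by the system or only by the user. Each agent's critic value is
    the global part plus its own role-specific part.
    """

    def __init__(self, sys_state_dim: int, user_state_dim: int, hidden_dim: int = 128):
        super().__init__()
        joint_dim = sys_state_dim + user_state_dim

        def mlp(in_dim: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )

        self.global_head = mlp(joint_dim)   # shared, task-level value
        self.sys_head = mlp(sys_state_dim)  # system-specific value
        self.user_head = mlp(user_state_dim)  # user-specific value

    def forward(self, sys_state: torch.Tensor, user_state: torch.Tensor):
        joint = torch.cat([sys_state, user_state], dim=-1)
        v_global = self.global_head(joint)
        # Assumed combination: each agent's value = global value + role value.
        v_sys = v_global + self.sys_head(sys_state)
        v_user = v_global + self.user_head(user_state)
        return v_global, v_sys, v_user


if __name__ == "__main__":
    # Toy usage: batch of 4 dialog states with hypothetical feature sizes.
    hvn = HybridValueNetwork(sys_state_dim=340, user_state_dim=340)
    sys_s, user_s = torch.randn(4, 340), torch.randn(4, 340)
    v_g, v_s, v_u = hvn(sys_s, user_s)
    print(v_g.shape, v_s.shape, v_u.shape)  # torch.Size([4, 1]) each
```

In an actor-critic setup along the lines described above, `v_sys` and `v_user` would serve as the baselines for the system and user policy gradients respectively, so that each agent is credited both for the shared task outcome and for fulfilling its own role.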

Similar Work