Learning Context-aware Task Reasoning For Efficient Meta-reinforcement Learning
Haozhe Wang, Jiale Zhou, Xuming He. Proceedings of the ACM on Programming Languages 2020 – 60 citations


Despite the recent success of deep network-based Reinforcement Learning (RL), achieving human-level efficiency in learning novel tasks remains elusive. While previous efforts attempt to address this challenge with meta-learning strategies, they typically suffer from sampling inefficiency with on-policy RL algorithms or from meta-overfitting with off-policy learning. In this work, we propose a novel meta-RL strategy to address these limitations. In particular, we decompose the meta-RL problem into three sub-tasks, task-exploration, task-inference, and task-fulfillment, instantiated with two deep network agents and a task encoder. During meta-training, our method learns a task-conditioned actor network for task-fulfillment, an explorer network with self-supervised reward shaping that encourages task-informative experiences during task-exploration, and a context-aware graph-based task encoder for task inference. We validate our approach with extensive experiments on several public benchmarks, and the results show that our algorithm effectively performs exploration for task inference, improves sample efficiency during both training and testing, and mitigates the meta-overfitting problem.
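The three-part decomposition described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class names, the toy goal-reaching environment, and the mean-pooling stand-in for the paper's context-aware graph-based encoder are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

class TaskEncoder:
    """Aggregates context transitions into a latent task embedding.
    (The paper uses a context-aware graph-based encoder; simple
    permutation-invariant mean pooling stands in for it here.)"""
    def __init__(self, latent_dim):
        self.latent_dim = latent_dim
    def infer(self, context):
        # context: list of (state, action, reward) transitions
        feats = np.array([np.concatenate([s, a, [r]]) for s, a, r in context])
        z = feats.mean(axis=0)  # aggregate the collected experience
        return z[: self.latent_dim]

class Explorer:
    """Collects transitions for task inference; in the paper its reward is
    shaped (self-supervised) to favor task-informative experience.
    Random exploration approximates that behavior in this sketch."""
    def rollout(self, env, steps):
        context, s = [], env.reset()
        for _ in range(steps):
            a = rng.normal(size=env.action_dim)  # exploratory action
            s, r = env.step(a)
            context.append((s, a, r))
        return context

class Actor:
    """Task-conditioned policy: acts on (state, task embedding)."""
    def __init__(self, state_dim, latent_dim, action_dim):
        self.W = rng.normal(size=(action_dim, state_dim + latent_dim)) * 0.1
    def act(self, state, z):
        return self.W @ np.concatenate([state, z])

class GoalEnv:
    """Toy task family: reward is negative distance to a hidden goal."""
    state_dim, action_dim = 2, 2
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)
    def reset(self):
        self.s = np.zeros(2)
        return self.s
    def step(self, a):
        self.s = self.s + 0.1 * a
        return self.s, -np.linalg.norm(self.s - self.goal)

# Meta-test phase on a new task: explore, infer the task, then act
# with the actor conditioned on the inferred task embedding.
env = GoalEnv(goal=[1.0, -1.0])
context = Explorer().rollout(env, steps=8)
z = TaskEncoder(latent_dim=3).infer(context)
action = Actor(2, 3, 2).act(env.reset(), z)
```

The key design point the sketch mirrors is the separation of roles: the explorer's sole job is to gather evidence about the task identity, the encoder turns that evidence into a latent task variable, and the actor never explores, it only exploits the inferred task.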
