Personalizing Dialogue Agents via Meta-learning

Zhaojiang Lin, Andrea Madotto, Chien-Sheng Wu, Pascale Fung. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019 – 183 citations


Existing personalized dialogue models use human-designed persona descriptions to improve dialogue consistency. Collecting such descriptions from existing dialogues is expensive and requires hand-crafted feature design. In this paper, we propose to extend Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to personalized dialogue learning without using any persona descriptions. Our model learns to quickly adapt to new personas by leveraging only a few dialogue samples collected from the same user, which is fundamentally different from conditioning the response on persona descriptions. Empirical results on the Persona-Chat dataset (Zhang et al., 2018) indicate that our solution outperforms non-meta-learning baselines on automatic evaluation metrics as well as in human-evaluated fluency and consistency.
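To make the adaptation idea concrete, below is a minimal sketch of one meta-training step in this setting: each "task" is a persona, the inner loop fine-tunes on a few dialogues from that user, and the outer loop updates the shared initialization from held-out dialogues of the same user. The names `model`, `persona_tasks`, and the assumption that calling the model on a batch returns a scalar loss are hypothetical placeholders, not the paper's actual code; the paper applies standard MAML, while this sketch uses the simpler first-order approximation for brevity.

```python
# First-order MAML sketch for persona adaptation (illustrative, not the
# authors' implementation). Assumes a PyTorch seq2seq `model` where
# model(batch) returns a scalar negative log-likelihood loss.
import copy
import torch

def maml_step(model, meta_optimizer, persona_tasks,
              inner_lr=0.01, inner_steps=3):
    """One meta-update over a batch of personas.

    Each task is (support_batches, query_batch): a few dialogues from
    one user for adaptation, plus held-out dialogues from the same user
    for meta-evaluation.
    """
    meta_optimizer.zero_grad()
    for support_batches, query_batch in persona_tasks:
        # Inner loop: adapt a copy of the model on this persona's dialogues.
        learner = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            for batch in support_batches:
                inner_opt.zero_grad()
                learner(batch).backward()  # scalar loss on support dialogues
                inner_opt.step()
        # Outer loop: evaluate the adapted model on held-out dialogues and
        # accumulate (first-order) gradients onto the shared initialization.
        query_loss = learner(query_batch)
        grads = torch.autograd.grad(query_loss, list(learner.parameters()))
        for p, g in zip(model.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_optimizer.step()
```

At test time, the same inner loop is all that is needed: clone the meta-trained model, take a few gradient steps on the new user's dialogues, and generate responses with the adapted copy, with no persona description involved.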
