Knowledge-grounded Dialogue Generation With Pre-trained Language Models

Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 – 150 citations


We study knowledge-grounded dialogue generation with pre-trained language models. Because external knowledge is often redundant and pre-trained models have a bounded input capacity, we propose equipping response generation, defined by a pre-trained language model, with a knowledge selection module, together with an unsupervised approach that jointly optimizes knowledge selection and response generation from unlabeled dialogues. Empirical results on two benchmarks indicate that our model significantly outperforms state-of-the-art methods in both automatic evaluation and human judgment.
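The abstract describes a select-then-generate pipeline: a selection module picks the knowledge sentences most relevant to the dialogue context, and only those are fed to the pre-trained generator, keeping the input within the model's capacity. The sketch below illustrates that flow under loose assumptions: the TF-IDF cosine selector, GPT-2 generator, prompt format, and all function names here are illustrative stand-ins, not the paper's architecture (the paper learns the selector jointly with the generator, without knowledge labels).

```python
# Minimal sketch of a select-then-generate pipeline, assuming a TF-IDF
# selector and GPT-2 generator as stand-ins for the paper's learned modules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def select_knowledge(context: str, candidates: list[str], k: int = 2) -> list[str]:
    """Score each candidate knowledge sentence against the dialogue context
    and keep the top-k. (Assumption: the paper instead trains a selection
    module jointly with generation on unlabeled dialogues.)"""
    vec = TfidfVectorizer().fit([context] + candidates)
    scores = cosine_similarity(vec.transform([context]),
                               vec.transform(candidates)).ravel()
    top = scores.argsort()[::-1][:k]
    return [candidates[i] for i in top]


def generate_response(context: str, knowledge: list[str]) -> str:
    """Condition the pre-trained LM on selected knowledge plus context;
    selection is what keeps this prompt within the model's input capacity."""
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    prompt = " ".join(knowledge) + " " + context + " Response:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)


if __name__ == "__main__":
    context = "I just watched Inception again. The ending still confuses me."
    candidates = [
        "Inception is a 2010 film directed by Christopher Nolan.",
        "The Eiffel Tower is located in Paris.",
        "Inception's ambiguous ending is a deliberate choice by Nolan.",
    ]
    picked = select_knowledge(context, candidates)
    print("Selected:", picked)
    print("Response:", generate_response(context, picked))
```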
