
A Knowledge-grounded Neural Conversation Model

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-Tau Yih, Michel Galley. Proceedings of the AAAI Conference on Artificial Intelligence, 2018 – 440 citations


Neural network models are capable of generating extremely natural-sounding conversational interactions. Nevertheless, these models have yet to demonstrate that they can incorporate content in the form of factual information or entity-grounded opinion that would enable them to serve in more task-oriented conversational applications. This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses without slot filling. We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external “facts”, allowing the model to be versatile and applicable in an open-domain setting. Our approach yields significant improvements over a competitive Seq2Seq baseline. Human judges found that our outputs are significantly more informative.
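The sketch below illustrates the core conditioning idea described in the abstract: a Seq2Seq decoder whose initial state combines an encoding of the conversation history with an attention-weighted summary of external facts (the paper's facts encoder is memory-network inspired). This is a minimal, hypothetical PyTorch sketch, not the authors' exact architecture; all module names, dimensions, and the choice to sum the two encodings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeGroundedSeq2Seq(nn.Module):
    """Hypothetical sketch: condition a Seq2Seq decoder on both the
    conversation history and an attention-weighted fact summary."""

    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.history_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.fact_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode_facts(self, facts, query):
        # facts: (batch, n_facts, fact_len) token ids; query: (batch, hid_dim)
        b, n, l = facts.shape
        _, h = self.fact_enc(self.embed(facts.view(b * n, l)))
        fact_vecs = h[-1].view(b, n, -1)                   # (b, n, hid)
        # Attend over facts using the history encoding as the query
        # (memory-network-style read; dot-product scoring assumed here).
        scores = torch.bmm(fact_vecs, query.unsqueeze(2))  # (b, n, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * fact_vecs).sum(dim=1)            # (b, hid)

    def forward(self, history, facts, response_in):
        # Encode the conversation history.
        _, h = self.history_enc(self.embed(history))
        query = h[-1]                                      # (b, hid)
        # Summarize the external facts relevant to this history.
        fact_summary = self.encode_facts(facts, query)
        # Condition generation on both signals by combining them
        # into the decoder's initial hidden state (sum is one option).
        init = (query + fact_summary).unsqueeze(0)         # (1, b, hid)
        dec_out, _ = self.decoder(self.embed(response_in), init)
        return self.out(dec_out)                           # (b, t, vocab) logits
```

Because the fact summary enters only through the decoder's initial state, the same model degenerates to a plain Seq2Seq baseline when no facts are supplied, which matches the versatile, open-domain framing in the abstract.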
