
DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2020 – 1004 citations

Tags: ACL · Dialogue & Multi-Turn · Evaluation · Model Architecture · Training Techniques

We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to human, as measured by both automatic and human evaluation, in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful, and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
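Because the released checkpoints are distributed through the Hugging Face model hub, a multi-turn exchange can be reproduced with a few lines of `transformers` code. The sketch below is illustrative rather than part of the paper: it assumes the `microsoft/DialoGPT-medium` checkpoint, uses default greedy decoding (the paper also explores beam search and MMI reranking), and follows DialoGPT's convention of concatenating dialogue turns separated by the end-of-sequence token.

```python
# A minimal sketch, assuming the publicly released microsoft/DialoGPT-medium
# checkpoint and the Hugging Face `transformers` library. Decoding settings
# here are illustrative, not the paper's tuned configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history_ids = None
for turn in range(3):
    user_input = input(">> User: ")
    # DialoGPT is trained on dialogue turns joined by the EOS token,
    # so each new turn is appended with that separator.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = (
        torch.cat([chat_history_ids, new_ids], dim=-1)
        if chat_history_ids is not None
        else new_ids
    )
    # Generate a continuation of the full dialogue history.
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated response tokens.
    response = tokenizer.decode(
        chat_history_ids[:, bot_input_ids.shape[-1]:][0],
        skip_special_tokens=True,
    )
    print("DialoGPT:", response)
```

Keeping the running `chat_history_ids` tensor is what makes the exchange multi-turn: each generation conditions on every previous turn, which is how the model produces the context-consistent responses reported above.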

Similar Work