
PLATO: Pre-trained Dialogue Generation Model With Discrete Latent Variable

Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), 2020 – 84 citations

[Paper]    
Model Architecture Attention Mechanism Transformer RAG Tools Reinforcement Learning Pre-Training Training Techniques

Pre-training models have proven effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge-grounded dialogue, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks, response generation and latent act recognition, are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
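The "flexible attention" idea in the abstract is that dialogue context (and the discrete latent token prepended to it) is encoded with full bi-directional attention, while response tokens are generated left-to-right and may only attend to the context plus previously generated response tokens. The sketch below illustrates one way such a mask could be built; the function name, the numpy representation, and the assumption that the latent token is counted inside `context_len` are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def build_dialogue_attention_mask(context_len: int, response_len: int) -> np.ndarray:
    """Sketch of a UniLM-style mask for dialogue generation pre-training.

    - Context tokens (optionally including a prepended latent token) attend
      to each other bi-directionally.
    - Response tokens attend to the full context and only to earlier
      response tokens (uni-directional generation).
    Returns a boolean matrix where True means attention is allowed.
    """
    total = context_len + response_len
    mask = np.zeros((total, total), dtype=bool)

    # Context block: full bi-directional visibility among context tokens.
    mask[:context_len, :context_len] = True

    # Response rows: every response token sees the whole context ...
    mask[context_len:, :context_len] = True
    # ... and only the response tokens generated so far (lower-triangular).
    mask[context_len:, context_len:] = np.tril(
        np.ones((response_len, response_len), dtype=bool)
    )
    return mask


if __name__ == "__main__":
    # e.g. 1 latent token + 4 context tokens, followed by 3 response tokens
    print(build_dialogue_attention_mask(context_len=5, response_len=3).astype(int))
```

With a mask like this, response generation and latent act recognition can share one Transformer: the same network scores candidate latent values against an observed response and decodes a response conditioned on a chosen latent value.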

Similar Work