
Generative Deep Neural Networks For Dialogue: A Short Review

Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau. arXiv 2016 – 64 citations

[Paper]    
Model Architecture · RAG · Applications · Survey Paper · Reinforcement Learning

Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimal amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.
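To make the encoder-decoder setup concrete, below is a minimal PyTorch sketch of a word-level Seq2Seq model for dialogue response generation: an encoder GRU summarizes the dialogue context into a hidden state, and a decoder GRU generates the response one word at a time conditioned on that state. The vocabulary size, hidden sizes, and special token ids are hypothetical placeholders, and this is only the generic architecture the review surveys, not the specific hierarchical models (e.g., HRED) it discusses.

```python
# Minimal Seq2Seq encoder-decoder sketch for dialogue response generation.
# All hyperparameters and token ids below are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # hypothetical vocabulary size
EMB_DIM = 64        # hypothetical word embedding size
HID_DIM = 128       # hypothetical GRU hidden size
SOS, EOS = 1, 2     # hypothetical start/end-of-sequence token ids

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.encoder = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.decoder = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def forward(self, context, response):
        # Encode the dialogue context into a fixed-size summary vector.
        _, h = self.encoder(self.embed(context))
        # Decode the gold response conditioned on that summary (teacher forcing).
        dec_out, _ = self.decoder(self.embed(response), h)
        return self.out(dec_out)  # per-step logits over the vocabulary

    @torch.no_grad()
    def respond(self, context, max_len=20):
        # Greedy word-level generation from the encoded context.
        _, h = self.encoder(self.embed(context))
        token, words = torch.tensor([[SOS]]), []
        for _ in range(max_len):
            dec_out, h = self.decoder(self.embed(token), h)
            token = self.out(dec_out).argmax(-1)
            if token.item() == EOS:
                break
            words.append(token.item())
        return words

model = Seq2Seq()
context = torch.randint(3, VOCAB_SIZE, (1, 12))   # fake context utterance
response = torch.randint(3, VOCAB_SIZE, (1, 8))   # fake gold response
logits = model(context, response)                  # train with cross-entropy
print(logits.shape, model.respond(context))
```

In this plain formulation the entire context is compressed into a single vector, which is exactly the limitation that motivates the context-aware and latent-variable encoder-decoder variants the review covers.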

Similar Work