DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models | Awesome LLM Papers

DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong. arXiv 2022 – 77 citations

[Code] [Paper]
Tags: Compositional Generalization · Content Enrichment · Evaluation · Has Code · Interdisciplinary Approaches · Multimodal Semantic Representation · Neural Machine Translation · RAG · Variational Autoencoders

Recently, diffusion models have emerged as a new paradigm for generative modeling. Despite their success in domains with continuous signals such as vision and audio, adapting diffusion models to natural language remains under-explored due to the discrete nature of text, especially for conditional generation. We tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. In extensive evaluation over a wide range of Seq2Seq tasks, we find that DiffuSeq achieves performance comparable to, or even better than, six established baselines, including a state-of-the-art model based on pre-trained language models. Beyond quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desirable in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq.
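
The abstract leaves the conditioning mechanism implicit. Below is a minimal, illustrative sketch of the partial-noising idea DiffuSeq builds on: embeddings of the source and target tokens are concatenated, and the Gaussian forward process is applied only to the target positions, so the source stays clean and acts as the condition at every timestep. The function names, tensor shapes, and the linear noise schedule here are assumptions for illustration, not the official implementation (see the linked repository for that).

```python
# Illustrative sketch (not the official DiffuSeq code) of partial noising:
# Gaussian diffusion noise is applied only to the target half of the
# concatenated [source; target] embedding sequence.
import torch

def make_alpha_bar(num_steps: int = 2000) -> torch.Tensor:
    """Cumulative product of (1 - beta_t) for a simple linear beta schedule."""
    betas = torch.linspace(1e-4, 0.02, num_steps)
    return torch.cumprod(1.0 - betas, dim=0)

def partial_noising(z0: torch.Tensor,
                    target_mask: torch.Tensor,
                    t: torch.Tensor,
                    alpha_bar: torch.Tensor) -> torch.Tensor:
    """
    z0:          (batch, seq_len, dim) clean embeddings of [source; target]
    target_mask: (batch, seq_len) 1.0 at target positions, 0.0 at source positions
    t:           (batch,) diffusion timesteps
    Returns z_t where q(z_t | z_0) = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps
    is applied only at target positions; source positions are left clean.
    """
    abar = alpha_bar[t].view(-1, 1, 1)                  # (batch, 1, 1)
    eps = torch.randn_like(z0)
    z_t = abar.sqrt() * z0 + (1.0 - abar).sqrt() * eps  # standard forward step
    mask = target_mask.unsqueeze(-1)                    # (batch, seq_len, 1)
    return mask * z_t + (1.0 - mask) * z0               # keep source unchanged

if __name__ == "__main__":
    batch, seq_len, dim, steps = 2, 16, 128, 2000
    alpha_bar = make_alpha_bar(steps)
    z0 = torch.randn(batch, seq_len, dim)               # stand-in embeddings
    target_mask = torch.zeros(batch, seq_len)
    target_mask[:, 8:] = 1.0                            # last 8 positions = target
    t = torch.randint(0, steps, (batch,))
    z_t = partial_noising(z0, target_mask, t, alpha_bar)
    # A transformer would then be trained to recover z0 (the target embeddings)
    # from z_t, conditioned on the untouched source positions.
    print(z_t.shape)
```

At sampling time, the reverse process denoises only the target positions while the source embeddings are held fixed, which is what makes the model conditional without a separate encoder.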

Similar Work