Towards Generating Long And Coherent Text With Multi-level Latent Variable Models

Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, Lawrence Carin. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019 – 48 citations

[Paper]
ACL, Content Enrichment, Interdisciplinary Approaches, Model Architecture, RAG, Variational Autoencoders

Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation with latent variables. In this paper, we investigate several multi-level structures for learning a VAE model that generates long and coherent text. In particular, we use a hierarchy of stochastic layers between the encoder and decoder networks to produce more informative latent codes. We also investigate a multi-level decoder structure that learns coherent long-term structure by generating intermediate sentence representations as high-level plan vectors. Empirical results demonstrate that the multi-level VAE model produces more coherent and less repetitive long text than standard VAE models, and that it further mitigates the posterior-collapse issue.
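The architecture described in the abstract can be sketched in a few lines of PyTorch. The module below is an illustrative assumption, not the authors' released code: the names (`q_z1`, `q_z2`, `plan_rnn`) and all dimensions are hypothetical choices. It only shows the two ingredients the abstract mentions, namely a hierarchy of stochastic layers producing the latent code, and a multi-level decoder that first rolls out sentence-level plan vectors and then decodes the words of each sentence conditioned on its plan.

```python
# Minimal sketch (assumed structure, not the paper's implementation) of a
# multi-level VAE for long text generation.
import torch
import torch.nn as nn

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps with the reparameterization trick.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

class MultiLevelVAE(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=256, hid_dim=512, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encoder: a GRU over the whole paragraph.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Hierarchy of stochastic layers: z1, then z2 conditioned on z1.
        self.q_z1 = nn.Linear(hid_dim, 2 * z_dim)
        self.q_z2 = nn.Linear(hid_dim + z_dim, 2 * z_dim)
        # Sentence-level decoder: emits one high-level plan vector per sentence.
        self.plan_init = nn.Linear(z_dim, hid_dim)
        self.plan_rnn = nn.GRUCell(z_dim, hid_dim)
        # Word-level decoder: conditions each token on its sentence plan.
        self.word_rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, num_sentences, words_per_sentence):
        # tokens: (batch, seq_len) token ids for the full paragraph.
        emb = self.embed(tokens)
        h, _ = self.encoder(emb)
        h_last = h[:, -1]                                  # paragraph summary
        # Bottom stochastic layer.
        mu1, logvar1 = self.q_z1(h_last).chunk(2, dim=-1)
        z1 = reparameterize(mu1, logvar1)
        # Top stochastic layer, conditioned on z1 (the stochastic hierarchy).
        mu2, logvar2 = self.q_z2(torch.cat([h_last, z1], dim=-1)).chunk(2, dim=-1)
        z2 = reparameterize(mu2, logvar2)

        # Multi-level decoding: roll out sentence plan vectors from the latent
        # code, then decode each sentence's words conditioned on its plan.
        plan_state = self.plan_init(z2)
        logits, start = [], 0
        for _ in range(num_sentences):
            plan_state = self.plan_rnn(z2, plan_state)     # sentence plan vector
            words = emb[:, start:start + words_per_sentence]
            plan = plan_state.unsqueeze(1).expand(-1, words.size(1), -1)
            out, _ = self.word_rnn(torch.cat([words, plan], dim=-1))
            logits.append(self.out(out))
            start += words_per_sentence
        return torch.cat(logits, dim=1), (mu1, logvar1), (mu2, logvar2)
```

In a full training loop, the returned logits would feed a token-level reconstruction loss and the two (mu, logvar) pairs would contribute KL terms, one per stochastic layer; it is at this stage that standard remedies for posterior collapse, such as KL annealing, would typically be applied.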

Similar Work