
Pretrained Language Models For Text Generation: A Survey

Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021 – 103 citations


Text generation has become one of the most important yet challenging tasks in natural language processing (NLP). The resurgence of deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs). In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation. As preliminaries, we present the general task definition and briefly describe the mainstream architectures of PLMs for text generation. As the core content, we discuss how to adapt existing PLMs to model different input data and to satisfy special properties in the generated text. We further summarize several important fine-tuning strategies for text generation. Finally, we present several future directions and conclude the paper. Our survey aims to provide text generation researchers with a synthesis of, and pointers to, related research.
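As a minimal, hedged illustration of the paradigm the survey covers (adapting a pretrained language model to a text generation task), the sketch below uses the Hugging Face Transformers library with a T5 checkpoint; the choice of model, prompt, and decoding settings is an assumption for illustration only and is not prescribed by the paper.

```python
# Illustrative sketch only (not from the survey): conditioning a pretrained
# seq2seq PLM on input text and decoding an output sequence.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"  # assumed checkpoint; any seq2seq PLM would do
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Encode the input data the PLM should condition on (here, a summarization prompt).
inputs = tokenizer(
    "summarize: Text generation has become one of the most important "
    "yet challenging tasks in natural language processing.",
    return_tensors="pt",
)

# Decode with beam search; sampling strategies are a common alternative.
output_ids = model.generate(**inputs, max_length=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Fine-tuning strategies discussed in the survey would update the model's parameters on task-specific input–output pairs before such generation; the snippet above only shows inference with an off-the-shelf checkpoint.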
