Paraphrasing With Large Language Models | Awesome LLM Papers

Paraphrasing With Large Language Models

Sam Witteveen, Martin Andrews. Proceedings of the 3rd Workshop on Neural Generation and Translation, 2019 – 80 citations


Recently, large language models such as GPT-2 have shown themselves to be extremely adept at text generation and have also been able to achieve high-quality results in many downstream NLP tasks such as text classification, sentiment analysis and question answering with the aid of fine-tuning. We present a useful technique for using a large language model to perform the task of paraphrasing on a variety of texts and subjects. Our approach is demonstrated to be capable of generating paraphrases not only at a sentence level but also for longer spans of text such as paragraphs without needing to break the text into smaller chunks.
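The approach described above fine-tunes a language model on concatenated (source, paraphrase) pairs and then primes it with a source text to generate a paraphrase. A minimal sketch of how such pairs might be prepared and how a paraphrase might be recovered from the model's output is shown below; the separator token and exact formatting are illustrative assumptions, not the paper's actual data format.

```python
# Sketch of data preparation for fine-tuning a language model (e.g. GPT-2)
# on paraphrase pairs. The SEP token is a hypothetical choice; the paper's
# actual training format is not reproduced here.

SEP = " >>> "            # assumed separator between source and paraphrase
EOS = " <|endoftext|>"   # GPT-2's end-of-text token

def make_training_example(source: str, paraphrase: str) -> str:
    """Concatenate a (source, paraphrase) pair into one training string."""
    return source + SEP + paraphrase + EOS

def extract_paraphrase(generated: str) -> str:
    """Recover the paraphrase from output primed with 'source + SEP'."""
    tail = generated.split(SEP, 1)[1] if SEP in generated else generated
    return tail.split("<|endoftext|>", 1)[0].strip()

example = make_training_example(
    "The cat sat on the mat.",
    "A cat was sitting on the mat.",
)
print(extract_paraphrase(example))  # -> A cat was sitting on the mat.
```

At generation time, the fine-tuned model would be prompted with the source text followed by the separator, and everything it emits up to the end-of-text token would be taken as the paraphrase.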
