
Text Generation With Exemplar-based Adaptive Decoding

Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, Dipanjan Das. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019) – 54 citations

[Paper]

Tags: Content Enrichment, RAG, Training Techniques, Variational Autoencoders

We propose a novel conditioned text generation model. It draws inspiration from traditional template-based text generation techniques, where the source provides the content (i.e., what to say), and the template influences how to say it. Building on the successful encoder-decoder paradigm, it first encodes the content representation from the given input text; to produce the output, it retrieves exemplar text from the training data as “soft templates,” which are then used to construct an exemplar-specific decoder. We evaluate the proposed model on abstractive text summarization and data-to-text generation. Empirical results show that this model achieves strong performance and outperforms comparable baselines.
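The retrieve-then-adapt idea can be sketched in a toy form. The snippet below is a hypothetical illustration, not the authors' implementation: it uses a bag-of-words "encoder" and cosine similarity to retrieve the nearest training exemplar, the step that supplies the "soft template". In the paper, the retrieved exemplar is then encoded and used to construct exemplar-specific decoder parameters; here only the retrieval stage is shown.

```python
import numpy as np

# Hypothetical sketch of the retrieval step in exemplar-based adaptive
# decoding (not the paper's code). Assumed pieces: a bag-of-words encoder
# and cosine-similarity retrieval over (source, target) training pairs.

VOCAB = ["the", "team", "won", "lost", "game", "score", "was", "high", "low"]
W2I = {w: i for i, w in enumerate(VOCAB)}

def encode(text):
    """Normalized bag-of-words vector (stand-in for the paper's encoder)."""
    v = np.zeros(len(VOCAB))
    for w in text.split():
        if w in W2I:
            v[W2I[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def retrieve_exemplar(query, training_data):
    """Return the training target whose source is most similar to the query."""
    q = encode(query)
    sims = [q @ encode(src) for src, tgt in training_data]
    return training_data[int(np.argmax(sims))][1]

train = [
    ("the team won the game", "a victory report"),
    ("the score was low", "a low-scoring summary"),
]

# Retrieval picks the exemplar whose source best matches the new input;
# the paper would then build a decoder conditioned on this exemplar.
print(retrieve_exemplar("the team won", train))  # -> a victory report
```

In the full model, the decoder is not fixed: the exemplar's representation parameterizes it (hence "exemplar-specific decoder"), so the exemplar governs *how* the content is expressed while the source input governs *what* is said.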
