
Summary Level Training Of Sentence Rewriting For Abstractive Summarization

Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-Goo Lee. Proceedings of the 2nd Workshop on New Frontiers in Summarization, 2019 – 63 citations

[Paper]

As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of first extracting salient sentences from a document and then paraphrasing the selected ones to generate a summary. However, existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between the training objective and the evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its natural language understanding capabilities. In extensive experiments, we show that the combination of our proposed model and training procedure obtains new state-of-the-art performance on both the CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
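The key distinction in the abstract is between a reward computed per extracted sentence and the summary-level ROUGE used at evaluation time: the two need not agree, which is the train/test mismatch the paper targets. The sketch below contrasts the two reward definitions. It is a minimal illustration, not the authors' training code; it assumes the `rouge-score` package, and the reference and rewritten sentences are hypothetical placeholders.

```python
# Contrast of sentence-level vs. summary-level ROUGE rewards.
# Assumes the `rouge-score` package (pip install rouge-score);
# the reference and rewritten sentences below are illustrative only.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def sentence_level_reward(rewritten, reference):
    """Average per-sentence ROUGE-L F1: the kind of signal prior
    Sentence Rewriting models optimize."""
    scores = [scorer.score(reference, s)["rougeL"].fmeasure for s in rewritten]
    return sum(scores) / len(scores)

def summary_level_reward(rewritten, reference):
    """ROUGE-L F1 of the whole rewritten summary: the evaluation
    metric, and the reward this paper maximizes via RL."""
    return scorer.score(reference, " ".join(rewritten))["rougeL"].fmeasure

reference = "the cat sat on the mat and then it slept"
rewritten = ["the cat sat on the mat", "then it slept"]

print(sentence_level_reward(rewritten, reference))  # scores each sentence in isolation
print(summary_level_reward(rewritten, reference))   # scores the summary as one unit
```

In a REINFORCE-style setup, the summary-level score would serve as the scalar return for a sampled rewritten summary, so the policy gradient pushes the model toward the metric actually reported at test time rather than toward locally good sentences.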

Similar Work