Deep Reinforcement Learning With Distributional Semantic Rewards For Abstractive Summarization

Siyao Li, Deren Lei, Pengda Qin, William Yang Wang. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019. 45 citations.

[Paper]
Tags: Compositional Generalization, Datasets, EMNLP, Ethics & Fairness, Evaluation, Interdisciplinary Approaches, Reinforcement Learning

Deep reinforcement learning (RL) is a commonly used strategy for abstractive summarization, as it addresses both exposure bias and the non-differentiability of the task metric. However, the conventional reward, ROUGE-L, simply looks for exact n-gram matches between candidates and annotated references, which inevitably makes the generated sentences repetitive and incoherent. In this paper, instead of ROUGE-L, we explore the practicability of using distributional semantics to measure the degree of matching. With distributional semantics, sentence-level evaluation can be obtained, and semantically correct phrases can be generated without being limited to the surface form of the reference sentences. Human judgments on the Gigaword and CNN/Daily Mail datasets show that our proposed distributional semantic reward (DSR) has a distinct advantage in capturing the lexical and compositional diversity of natural language.
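To make the idea concrete, here is a minimal sketch of how a distributional semantic reward could replace ROUGE-L inside a self-critical policy-gradient update. This is an illustration under stated assumptions, not the authors' implementation: cosine similarity over mean-pooled word vectors stands in for the paper's DSR, and the names `embed`, `dsr`, and `self_critical_loss` are hypothetical.

```python
# Sketch: a distributional semantic reward plugged into self-critical RL.
# Assumption: `embed` is a dict-like map from token -> word vector (e.g.
# pretrained embeddings); the paper's actual reward may be computed differently.

import numpy as np

def sentence_vector(tokens, embed, dim=300):
    """Mean-pool word vectors; return zeros if no token is in the vocabulary."""
    vecs = [embed[t] for t in tokens if t in embed]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def dsr(candidate, reference, embed):
    """Distributional semantic reward: cosine similarity of sentence vectors.

    Unlike ROUGE-L, this does not require exact n-gram overlap, so a
    paraphrase of the reference can still receive a high reward.
    """
    c = sentence_vector(candidate, embed)
    r = sentence_vector(reference, embed)
    denom = np.linalg.norm(c) * np.linalg.norm(r)
    return float(c @ r / denom) if denom > 0 else 0.0

def self_critical_loss(sum_log_probs, sampled, greedy, reference, embed):
    """REINFORCE with a self-critical baseline: reward(sample) - reward(greedy).

    sum_log_probs: summed token log-probabilities of the sampled summary
    (a scalar). Minimizing this loss raises the probability of samples whose
    semantic reward beats the greedy decode.
    """
    advantage = dsr(sampled, reference, embed) - dsr(greedy, reference, embed)
    return -advantage * sum_log_probs

# Toy usage with a 2-dimensional hypothetical vocabulary.
embed = {"cat": np.array([1.0, 0.0]),
         "feline": np.array([0.9, 0.1]),
         "dog": np.array([0.0, 1.0])}
print(dsr(["feline"], ["cat"], embed))  # high: semantically close, no overlap
print(dsr(["dog"], ["cat"], embed))     # low: semantically distant
```

Note how `["feline"]` scores well against the reference `["cat"]` despite zero n-gram overlap, which is the behavior ROUGE-L cannot provide.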
