Entity-level Factual Consistency Of Abstractive Text Summarization | Awesome LLM Papers

Entity-level Factual Consistency Of Abstractive Text Summarization

Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira Dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, Bing Xiang. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021 – 95 citations

[Paper]
ACL Datasets EACL Evaluation Interdisciplinary Approaches Training Techniques

A key challenge for abstractive summarization is ensuring factual consistency of the generated summary with respect to the original document. For example, state-of-the-art models trained on existing datasets exhibit entity hallucination, generating names of entities that are not present in the source document. We propose a set of new metrics to quantify the entity-level factual consistency of generated summaries, and we show that the entity hallucination problem can be alleviated by simply filtering the training data. In addition, we add a summary-worthy entity classification task to the training process, as well as a joint entity and summary generation approach, both of which yield further improvements in entity-level metrics.
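The abstract's ideas of entity-level metrics and training-data filtering can be illustrated with a minimal sketch. This is not the paper's official code: the function names and the exact metric definitions below (precision of summary entities against the source, precision/recall against the reference summary) are assumptions in the spirit of the described approach, and entity extraction is stubbed with plain string sets where a real implementation would run an NER model over the text.

```python
# Illustrative sketch of entity-level consistency metrics (assumed
# definitions, not the paper's released implementation). Entities are
# represented as sets of strings; in practice they would come from an
# NER model applied to the source, the reference, and the generated summary.

def entity_metrics(summary_ents, source_ents, target_ents):
    """Return (precision vs. source, precision vs. target, recall vs. target)."""
    summary_ents = set(summary_ents)
    source_ents = set(source_ents)
    target_ents = set(target_ents)

    # Precision w.r.t. the source document: the fraction of generated
    # entities that actually appear in the source. Low values indicate
    # entity hallucination.
    prec_source = (len(summary_ents & source_ents) / len(summary_ents)
                   if summary_ents else 1.0)

    # Precision and recall w.r.t. the reference (target) summary,
    # measuring whether the model names the summary-worthy entities.
    prec_target = (len(summary_ents & target_ents) / len(summary_ents)
                   if summary_ents else 1.0)
    recall_target = (len(summary_ents & target_ents) / len(target_ents)
                     if target_ents else 1.0)
    return prec_source, prec_target, recall_target


def filter_training_pair(source_ents, reference_ents):
    """Data-filtering heuristic (an assumption of how filtering could work):
    keep a (document, reference summary) training pair only if every entity
    in the reference summary is grounded in the source document."""
    return set(reference_ents) <= set(source_ents)
```

For example, a summary mentioning {"Obama", "Paris"} against a source that only contains "Obama" would score a source precision of 0.5, flagging "Paris" as a hallucinated entity; the corresponding training pair would be dropped by the filter.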

Similar Work