
An Empirical Survey On Long Document Summarization: Datasets, Models And Metrics

Huan Yee Koh, Jiaxin Ju, Ming Liu, Shirui Pan. ACM Computing Surveys, 2022 – 69 citations


Long documents such as academic articles and business reports have been the standard format for detailing important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short, concise texts encapsulating the most important information would thus be a significant aid to readers' comprehension. Recently, with the advent of neural architectures, significant research efforts have been made to advance automatic text summarization systems, and numerous studies on the challenges of extending these systems to the long document domain have emerged. In this survey, we provide a comprehensive overview of the research on long document summarization and a systematic evaluation across the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study on the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of the summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
