
Evaluation Of Text Generation: A Survey

Asli Celikyilmaz, Elizabeth Clark, Jianfeng Gao. arXiv 2020 – 200 citations


The paper surveys evaluation methods for natural language generation (NLG) systems that have been developed in the last few years. We group NLG evaluation methods into three categories: (1) human-centric evaluation metrics, (2) automatic metrics that require no training, and (3) machine-learned metrics. For each category, we discuss the progress that has been made and the challenges still being faced, with a focus on the evaluation of recently proposed NLG tasks and neural NLG models. We then present two examples of task-specific NLG evaluation, for automatic text summarization and long text generation, and conclude the paper by proposing future research directions.
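The survey's second category, automatic metrics that require no training, includes n-gram overlap metrics such as BLEU and ROUGE. A minimal sketch of the clipped n-gram precision at the core of such metrics (function names are illustrative, not from the paper):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision: the fraction of candidate n-grams that
    also appear in the reference, with counts clipped to the reference's
    counts so repeated tokens are not over-rewarded."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    total = sum(cand.values())
    if total == 0:
        return 0.0
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / total

candidate = "the cat sat on the mat".split()
reference = "the cat is on the mat".split()
print(ngram_precision(candidate, reference, n=1))  # 5 of 6 unigrams match
print(ngram_precision(candidate, reference, n=2))  # 3 of 5 bigrams match
```

BLEU combines several such precisions (n = 1..4) with a brevity penalty; the sketch shows why these metrics need no training but also why they can miss paraphrases that share few surface n-grams with the reference.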
