UNION: An Unreferenced Metric For Evaluating Open-ended Story Generation | Awesome LLM Papers

UNION: An Unreferenced Metric For Evaluating Open-ended Story Generation

Jian Guan, Minlie Huang. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 – 47 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Datasets · EMNLP · Evaluation · Interdisciplinary Approaches · Model Architecture

Despite the success of existing referenced metrics (e.g., BLEU and MoverScore), they correlate poorly with human judgments for open-ended text generation, including story and dialog generation, because of the notorious one-to-many issue: there are many plausible outputs for the same input, which may differ substantially in wording or semantics from the limited number of given references. To alleviate this issue, we propose UNION, a learnable unreferenced metric for evaluating open-ended story generation, which measures the quality of a generated story without any reference. Built on top of BERT, UNION is trained to distinguish human-written stories from negative samples and to recover the perturbations in negative stories. We propose an approach for constructing negative samples by mimicking the errors commonly observed in existing NLG models, including repeated plots, conflicting logic, and long-range incoherence. Experiments on two story datasets demonstrate that UNION is a reliable measure for evaluating the quality of generated stories: it correlates better with human judgments and is more generalizable than existing state-of-the-art metrics.
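The abstract's negative-sample construction can be illustrated with a minimal sketch. The perturbation functions below are illustrative assumptions, not the paper's exact implementation: each one mimics a class of model error the paper names (repeated plots, conflicting logic, long-range incoherence), and the toy antonym table is hypothetical.

```python
import random

def repeat_plot(sentences):
    """Mimic repeated-plot errors by duplicating a random sentence in place."""
    i = random.randrange(len(sentences))
    return sentences[:i + 1] + [sentences[i]] + sentences[i + 1:]

def shuffle_order(sentences):
    """Mimic long-range incoherence by reordering the sentences."""
    perturbed = sentences[:]
    while perturbed == sentences and len(sentences) > 1:
        random.shuffle(perturbed)
    return perturbed

def negate(sentences, antonyms=None):
    """Mimic conflicting logic by swapping one word per sentence for its antonym.

    The antonym table here is a toy placeholder; a real system would use a
    lexical resource such as WordNet.
    """
    antonyms = antonyms or {"happy": "sad", "won": "lost"}
    out = []
    for s in sentences:
        for word, anti in antonyms.items():
            if word in s:
                s = s.replace(word, anti, 1)
                break
        out.append(s)
    return out

# A human-written story becomes a negative sample via composed perturbations;
# UNION would then be trained to separate the two and undo the damage.
story = ["Tom entered the race.", "He trained hard.", "He won the medal."]
negative = repeat_plot(shuffle_order(negate(story)))
```

In the paper's setup, many such perturbed variants serve as negatives against their human-written originals, so the BERT-based discriminator learns error patterns that reference-based metrics like BLEU cannot penalize.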

Similar Work