BLEURT: Learning Robust Metrics For Text Generation

Thibault Sellam, Dipanjan Das, Ankur P. Parikh. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), 2020 – 241 citations

[Paper]    
Language Modeling Model Architecture Ethics and Bias WMT Reinforcement Learning Pre-Training BERT Training Techniques Evaluation

Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
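In practice, the learned metric is exposed as a simple scorer by the paper's open-source release (github.com/google-research/bleurt). The sketch below shows typical usage of that `bleurt` package, assuming it is installed and a pre-trained checkpoint has been downloaded; the checkpoint path and example sentences are illustrative.

```python
# Minimal sketch: scoring candidates against references with the
# open-source BLEURT package. Assumes the package is installed
# (pip install git+https://github.com/google-research/bleurt.git)
# and a checkpoint has been downloaded; the path is illustrative.
from bleurt import score

checkpoint = "bleurt/bleurt-base-128"  # path to a downloaded checkpoint

references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]

scorer = score.BleurtScorer(checkpoint)
# Returns one real-valued score per (reference, candidate) pair;
# higher scores indicate the candidate is judged closer to the reference.
scores = scorer.score(references=references, candidates=candidates)
print(scores)  # a list of floats, one per pair
```

Under the hood, the checkpoint is the BERT model the abstract describes: pre-trained on millions of synthetic sentence pairs and then fine-tuned to regress human quality ratings.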

Similar Work