BERTScore: Evaluating Text Generation with BERT

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi. arXiv 2019 · 2015 citations

Topics: Content Enrichment · Evaluation · Interdisciplinary Approaches · Model Architecture · Neural Machine Translation · RAG · Security · Variational Autoencoders

We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTScore is more robust to challenging examples when compared to existing metrics.
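
The core of the metric is a greedy soft alignment in contextual-embedding space: each reference token is matched to its most similar candidate token (recall), each candidate token to its most similar reference token (precision), and the two are combined into an F1. The sketch below is a minimal illustration of that idea, not the authors' released implementation; it omits refinements described in the paper such as importance weighting and baseline rescaling, and the choice of `bert-base-uncased`, the final hidden layer, and the keeping of tokenizer special tokens are simplifying assumptions made here.

```python
# Minimal, illustrative sketch of BERTScore-style greedy matching.
# Assumptions: bert-base-uncased, final hidden layer, no IDF weighting,
# no baseline rescaling, and [CLS]/[SEP] tokens are not stripped.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed model for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Contextual embeddings per token, L2-normalized so dot products are cosine similarities."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, hidden_dim)
    return torch.nn.functional.normalize(hidden, dim=-1)

def bertscore_f1(candidate: str, reference: str) -> float:
    cand, ref = embed(candidate), embed(reference)
    sim = cand @ ref.T                        # pairwise cosine similarities
    recall = sim.max(dim=0).values.mean()     # each reference token -> closest candidate token
    precision = sim.max(dim=1).values.mean()  # each candidate token -> closest reference token
    return (2 * precision * recall / (precision + recall)).item()

print(bertscore_f1("the cat sat on the mat", "a cat was sitting on the mat"))
```

In this formulation, recall rewards candidates that cover the reference's content, while precision penalizes candidate tokens with no close counterpart in the reference; because matches are made in contextual-embedding space rather than by exact string overlap, paraphrases and synonyms can still score highly.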

Similar Work