
CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code

Shuyan Zhou, Uri Alon, Sumit Agarwal, Graham Neubig. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) – 61 citations

[Code] [Paper]
Tags: EMNLP · Evaluation · Has Code · Interdisciplinary Approaches · LLM for Code · Model Architecture

Since the rise of neural natural-language-to-code models (NL->Code) that can generate long expressions and statements rather than a single next token, one of the major problems has been reliably evaluating their generated output. In this paper, we propose CodeBERTScore: an evaluation metric for code generation, which builds on BERTScore (Zhang et al., 2020). Instead of encoding only the generated tokens as in BERTScore, CodeBERTScore also encodes the natural language input preceding the generated code, thus modeling the consistency between the generated code and its given natural language context as well. We perform an extensive evaluation of CodeBERTScore across four programming languages. We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics. That is, generated code that receives a higher score from CodeBERTScore is more likely to be preferred by humans and to function correctly when executed. We release five language-specific pretrained models to use with our publicly available code. Our language-specific models have been downloaded more than 1,000,000 times from the Hugging Face Hub. Our code and data are available at https://github.com/neulab/code-bert-score.
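To make the scoring procedure concrete, below is a minimal sketch of the BERTScore-style matching that CodeBERTScore builds on: contextual token embeddings from a pretrained code model, a pairwise cosine-similarity matrix, and greedy best-match precision/recall combined into an F1. Two assumptions to flag: the checkpoint name `neulab/codebert-python` is modeled on the released language-specific models but is not confirmed here, and whereas the full method encodes the NL context yet matches only the code tokens, this sketch simply prepends the context and matches all tokens for brevity.

```python
# Minimal sketch of the BERTScore-style matching underlying CodeBERTScore.
# Assumptions: MODEL_NAME mirrors the released language-specific checkpoints;
# the real method masks out the NL-context tokens when matching, while this
# sketch matches every token for brevity.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "neulab/codebert-python"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Contextual token embeddings, L2-normalized so dot products are cosines."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, dim)
    return F.normalize(hidden, dim=-1)

def codebertscore_f1(candidate: str, reference: str, nl_context: str = "") -> float:
    """Greedy-matching F1 between candidate and reference token embeddings."""
    cand = embed(nl_context + candidate)
    ref = embed(nl_context + reference)
    sim = cand @ ref.T                        # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean()  # best reference match per candidate token
    recall = sim.max(dim=0).values.mean()     # best candidate match per reference token
    return (2 * precision * recall / (precision + recall)).item()

print(codebertscore_f1("return sorted(xs)",
                       "return sorted(xs, reverse=False)",
                       nl_context="# sort a list in ascending order\n"))
```

The released package (see the [Code] link above) wraps this matching per language and adds the refinements described in the paper; consult the repository README for the exact scoring API.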

Similar Work