
Designing Precise And Robust Dialogue Response Evaluators

Tianyu Zhao, Divesh Lala, Tatsuya Kawahara. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020 – 40 citations

[Code] [Paper]
ACL · Compositional Generalization · Evaluation · Has Code · Interdisciplinary Approaches · Multimodal Semantic Representation · Training Techniques

Automatic dialogue response evaluators have been proposed as an alternative to automated metrics and human evaluation. However, existing automatic evaluators achieve only moderate correlation with human judgement, and they are not robust. In this work, we propose to build a reference-free evaluator and to exploit the power of semi-supervised training and pretrained (masked) language models. Experimental results demonstrate that the proposed evaluator achieves a strong correlation (> 0.6) with human judgement and generalizes robustly to diverse responses and corpora. We open-source the code and data at https://github.com/ZHAOTING/dialog-processing.
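To make the idea of a reference-free evaluator concrete, here is a minimal sketch, not the authors' exact architecture: a pretrained masked language model (RoBERTa here) encodes the dialogue context together with a candidate response, and a small regression head maps the pooled representation to a quality score. The checkpoint name, head, and class names are illustrative assumptions; see the linked repository for the actual implementation.

```python
# Hedged sketch of a reference-free dialogue response evaluator.
# Assumptions (not from the paper): roberta-base checkpoint, a single
# linear regression head on the first-token encoding, and sentence-pair
# packing of (context, response). The untrained head produces
# meaningless scores until the model is fine-tuned.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class ReferenceFreeEvaluator(nn.Module):
    def __init__(self, encoder_name: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Regression head: pooled encoding -> scalar quality score.
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # first-token ([CLS]-style) encoding
        return self.head(pooled).squeeze(-1)  # one score per (context, response) pair


tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = ReferenceFreeEvaluator()
model.eval()

context = "How was your weekend?"
response = "Great, I went hiking with some friends."
# Context and response are packed as a sentence pair; no reference
# response is needed, which is what makes the evaluator reference-free.
batch = tokenizer(context, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(batch["input_ids"], batch["attention_mask"])
print(float(score))
```

In the paper, an encoder of this kind is trained with semi-supervised objectives so that its scores correlate with human judgement; the sketch above only illustrates the scoring interface, where each (context, response) pair is mapped to a scalar without consulting any reference response.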

Similar Work