Unsupervised Evaluation of Interactive Dialog with DialoGPT

Shikib Mehri, Maxine Eskenazi. Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2020 – 85 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Datasets · Evaluation · Fine Tuning · Neural Machine Translation · RAG · Tools · Training Techniques

Meaningful and interpretable automatic evaluation metrics are important for open-domain dialog research, and standard language generation metrics have been shown to be ineffective for dialog. This paper introduces the FED metric (fine-grained evaluation of dialog), an automatic evaluation metric that uses DialoGPT without any fine-tuning or supervision. It also introduces the FED dataset, constructed by annotating a set of human-system and human-human conversations with eighteen fine-grained dialog qualities. The FED metric (1) does not rely on a ground-truth response, (2) does not require training data, and (3) measures fine-grained dialog qualities at both the turn and whole-dialog levels. FED attains moderate to strong correlation with human judgement at both levels.
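To make the reference-free, training-free idea concrete, below is a minimal sketch of a FED-style score: it compares the likelihood DialoGPT assigns to positive versus negative follow-up utterances after a dialog context, using the Hugging Face transformers library. The specific follow-up utterances, the single-quality scoring, and the positive-minus-negative aggregation are illustrative assumptions, not the paper's exact lists or weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative follow-ups for one quality ("interesting"); the paper uses
# hand-curated positive/negative utterances per quality, not these exact ones.
POSITIVE = ["Wow! That's really cool!"]
NEGATIVE = ["That's not very interesting."]

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
model.eval()


def follow_up_loglik(context: str, follow_up: str) -> float:
    """Average log-likelihood DialoGPT assigns to `follow_up` given `context`."""
    ctx_ids = tokenizer.encode(context + tokenizer.eos_token, return_tensors="pt")
    fu_ids = tokenizer.encode(follow_up + tokenizer.eos_token, return_tensors="pt")
    input_ids = torch.cat([ctx_ids, fu_ids], dim=-1)
    # Mask the context tokens so the loss is computed over the follow-up only.
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[-1]] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over follow-up tokens
    return -loss.item()


def fed_style_score(context: str) -> float:
    """Higher when DialoGPT prefers positive over negative follow-ups."""
    pos = sum(follow_up_loglik(context, u) for u in POSITIVE) / len(POSITIVE)
    neg = sum(follow_up_loglik(context, u) for u in NEGATIVE) / len(NEGATIVE)
    return pos - neg


print(fed_style_score("Hi! I just got back from a trip to Iceland."))
```

In the paper, each fine-grained quality is measured with its own set of follow-up utterances, and turn- and dialog-level scores are obtained by aggregating over them; the sketch above collapses this to a single quality for brevity.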

Similar Work