Towards Unified Dialogue System Evaluation: A Comprehensive Analysis Of Current Evaluation Protocols

Sarah E. Finch, Jinho D. Choi. Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2020 – 46 citations

[Paper]
Dialogue & Multi-Turn · Evaluation · Productivity Enhancement · Question Answering · Tools

As conversational AI-based dialogue management becomes an increasingly active research topic, the need for a standardized and reliable evaluation procedure grows ever more pressing. A variety of evaluation protocols are currently used to assess chat-oriented dialogue systems, making it difficult to conduct fair comparative studies across approaches or to gain an insightful understanding of their relative merits. To foster this research, a more robust evaluation protocol must be put in place. This paper presents a comprehensive synthesis of both automated and human evaluation methods for dialogue systems, identifying their shortcomings while accumulating evidence towards the most effective evaluation dimensions. A total of 20 papers from the last two years are surveyed to analyze three types of evaluation protocols: automated, static, and interactive. Finally, the evaluation dimensions used in these papers are compared against our expert evaluation of the system-user dialogue data collected from the Alexa Prize 2020.
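To make the "automated" protocol category concrete, the sketch below implements two representative automated metrics often applied to chat-oriented dialogue systems: distinct-n response diversity and unigram-overlap F1 against a reference response. This is an illustrative sketch, not code from the paper; the function names and toy responses are invented for the example, and the choice of these two metrics is an assumption about what a typical automated protocol computes.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n: ratio of unique n-grams to total n-grams across system responses.

    A common automated diversity metric for chat-oriented dialogue systems
    (illustrative; not taken from the surveyed paper).
    """
    ngrams = []
    for response in responses:
        tokens = response.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

def unigram_f1(hypothesis, reference):
    """Word-overlap F1 between a system response and a reference response."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())  # multiset intersection of token counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Toy responses, invented for the example.
    responses = ["i like music", "i like movies", "what music do you enjoy"]
    print(f"distinct-1: {distinct_n(responses, 1):.3f}")
    print(f"distinct-2: {distinct_n(responses, 2):.3f}")
    print(f"unigram F1: {unigram_f1('i like music a lot', 'i really like music'):.3f}")
```

Reference-based overlap metrics like the F1 above are exactly the kind of automated measure whose weak correlation with human judgments motivates surveys of static and interactive human evaluation protocols.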

Similar Work