Building And Evaluating Open-domain Dialogue Corpora With Clarifying Questions

Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeffrey Dalton, Mikhail Burtsev. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021 – 52 citations

[Paper]
Tags: Datasets, Dialogue & Multi Turn, EMNLP, Evaluation, Interdisciplinary Approaches

Enabling open-domain dialogue systems to ask clarifying questions when appropriate is an important direction for improving the quality of system responses. When a user request is not specific enough for a conversational system to answer right away, asking a clarifying question increases the chances of retrieving a satisfying answer. To address the problem of asking clarifying questions in open-domain dialogues, the authors: (1) collect and release a new dataset of open-domain single- and multi-turn conversations, (2) benchmark several state-of-the-art neural baselines, and (3) propose a pipeline of offline and online steps for evaluating the quality of clarifying questions across dialogues. These contributions provide a foundation for further research.
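
To make the "answer vs. clarify" decision described in the abstract concrete, here is a minimal sketch of such a dialogue turn. The thresholded ambiguity score and lexical-overlap ranker below are hypothetical stand-ins for the neural baselines the paper benchmarks (which are not specified here); only the control flow is illustrative.

```python
from dataclasses import dataclass


@dataclass
class Turn:
    user_request: str


def ambiguity_score(request: str) -> float:
    """Toy proxy for ambiguity: shorter requests are treated as vaguer.
    A real system would use a trained classifier or retrieval confidence."""
    return max(0.0, 1.0 - len(request.split()) / 10.0)


def rank_clarifying_questions(request: str, candidates: list[str]) -> list[str]:
    """Toy lexical-overlap ranker over a pool of candidate questions.
    This selection step is what the paper's neural baselines would perform."""
    req_tokens = set(request.lower().split())
    return sorted(
        candidates,
        key=lambda q: len(req_tokens & set(q.lower().split())),
        reverse=True,
    )


def respond(turn: Turn, question_pool: list[str], threshold: float = 0.5) -> str:
    if ambiguity_score(turn.user_request) > threshold:
        # Request is too vague to answer directly: ask the top-ranked
        # clarifying question instead of retrieving an answer.
        return rank_clarifying_questions(turn.user_request, question_pool)[0]
    return f"<answer retrieved for: {turn.user_request}>"


if __name__ == "__main__":
    pool = [
        "Do you mean the programming language or the snake?",
        "Which Python version are you asking about?",
    ]
    # Short, underspecified request -> the system asks to clarify.
    print(respond(Turn("Tell me about Python"), pool))
```

In the paper's proposed evaluation pipeline, the offline step would score such question-selection models against the released dataset, while the online step would assess clarifying questions in live interactions; the sketch above corresponds only to the per-turn decision being evaluated.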

Similar Work