
Dialoglue: A Natural Language Understanding Benchmark For Task-oriented Dialogue

Shikib Mehri, Mihail Eric, Dilek Hakkani-Tur. arXiv 2020 – 104 citations


A long-standing goal of task-oriented dialogue research is the ability to flexibly adapt dialogue models to new domains. To progress research in this direction, we introduce DialoGLUE (Dialogue Language Understanding Evaluation), a public benchmark consisting of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks, designed to encourage dialogue research in representation-based transfer, domain adaptation, and sample-efficient task learning. We release several strong baseline models, demonstrating performance improvements over a vanilla BERT architecture and state-of-the-art results on 5 out of 7 tasks, by pre-training on a large open-domain dialogue corpus and task-adaptive self-supervised training. Through the DialoGLUE benchmark, the baseline methods, and our evaluation scripts, we hope to facilitate progress towards the goal of developing more general task-oriented dialogue models.
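The abstract's "task-adaptive self-supervised training" is typically realized as masked language modeling on the utterances of the target task before fine-tuning. The sketch below illustrates only that adaptation step with Hugging Face Transformers; it is not the authors' released code, and the checkpoint name, data file, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch of task-adaptive masked-LM training on dialogue utterances,
# assuming a plain-text file with one utterance per line (hypothetical path).
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical file of task utterances, one per line.
raw = load_dataset("text", data_files={"train": "task_utterances.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens for the self-supervised objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mlm-adapted-bert",
    num_train_epochs=1,              # illustrative values, not the paper's settings
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```

The adapted encoder would then be fine-tuned separately on each DialoGLUE task (intent classification, slot filling, semantic parsing, dialogue state tracking) using that task's labeled data.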
