Learning To Classify Intents And Slot Labels Given A Handful Of Examples | Awesome LLM Papers

Learning To Classify Intents And Slot Labels Given A Handful Of Examples

Jason Krone, Yi Zhang, Mona Diab. Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, 2020 – 44 citations

[Paper]
Compositional Generalization · Datasets · Dialogue & Multi Turn · Evaluation · Few Shot · Fine Tuning · Interdisciplinary Approaches · Model Architecture · Multimodal · Semantic Representation · Training Techniques

Intent classification (IC) and slot filling (SF) are core components in most goal-oriented dialogue systems. Current IC/SF models perform poorly when the number of training examples per class is small. We propose a new few-shot learning task, few-shot IC/SF, to study and improve the performance of IC and SF models on classes not seen at training time in ultra-low-resource scenarios. We establish a few-shot IC/SF benchmark by defining few-shot splits for three public IC/SF datasets: ATIS, TOP, and Snips. We show that two popular few-shot learning algorithms, model-agnostic meta-learning (MAML) and prototypical networks, outperform a fine-tuning baseline on this benchmark. Prototypical networks achieve significant gains in IC performance on the ATIS and TOP datasets, while both prototypical networks and MAML outperform the baseline with respect to SF on all three datasets. In addition, we demonstrate that joint training, as well as the use of pre-trained language models (ELMo and BERT in our case), is complementary to these few-shot learning methods and yields further gains.
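To make the prototypical-networks idea concrete for intent classification, the sketch below shows the core episode logic: each class prototype is the mean of its support embeddings, and a query utterance is assigned to the nearest prototype. This is a minimal NumPy illustration, not the paper's implementation; in the paper the embeddings would come from an utterance encoder (e.g., ELMo or BERT), whereas here they are toy 2-D vectors, and the helper names `prototypes` and `classify` are our own.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # A class prototype is the mean of that class's support embeddings.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to the prototype with the smallest
    # squared Euclidean distance.
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 1-shot episode with two intent classes in a 2-D embedding space.
support = np.array([[0.0, 0.0], [10.0, 10.0]])  # one support example per class
labels = np.array([0, 1])
protos = prototypes(support, labels, n_classes=2)

queries = np.array([[0.5, -0.2], [9.0, 11.0]])
print(classify(queries, protos))  # -> [0 1]
```

At train time the softmax over negative distances would provide the loss for learning the encoder; at test time, as here, classifying novel classes only requires a handful of labeled support examples.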

Similar Work