
BERT For Joint Intent Classification And Slot Filling

Qian Chen, Zhu Zhuo, Wen Wang. arXiv 2019 – 423 citations

Tags: Datasets, Evaluation, Fine Tuning, Model Architecture, Training Techniques

Intent classification and slot filling are two essential tasks for natural language understanding. They often suffer from small-scale human-labeled training data, resulting in poor generalization capability, especially for rare words. Recently, a new language representation model, BERT (Bidirectional Encoder Representations from Transformers), has enabled pre-training of deep bidirectional representations on large-scale unlabeled corpora and has yielded state-of-the-art models for a wide variety of natural language processing tasks after simple fine-tuning. However, there has not been much effort in exploring BERT for natural language understanding. In this work, we propose a joint intent classification and slot filling model based on BERT. Experimental results demonstrate that our proposed model achieves significant improvements in intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to attention-based recurrent neural network models and slot-gated models.
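The sketch below illustrates the general idea of such a joint model: a shared BERT encoder, an intent head on the pooled [CLS] representation, a per-token slot head, and a summed cross-entropy objective. It is a minimal illustration assuming the Hugging Face `transformers` library; the class name, hyperparameters, and equal loss weighting are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of a joint intent classification + slot filling model on top of BERT.
# Assumes Hugging Face `transformers`; details (dropout, loss weights) are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel

class JointBert(nn.Module):
    def __init__(self, num_intents: int, num_slots: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.dropout = nn.Dropout(0.1)
        # Intent head: classifies the whole utterance from the pooled [CLS] vector.
        self.intent_classifier = nn.Linear(hidden, num_intents)
        # Slot head: tags every token with an IOB-style slot label.
        self.slot_classifier = nn.Linear(hidden, num_slots)

    def forward(self, input_ids, attention_mask, intent_labels=None, slot_labels=None):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = self.dropout(outputs.last_hidden_state)  # (batch, seq_len, hidden)
        pooled_output = self.dropout(outputs.pooler_output)        # (batch, hidden)

        intent_logits = self.intent_classifier(pooled_output)      # (batch, num_intents)
        slot_logits = self.slot_classifier(sequence_output)        # (batch, seq_len, num_slots)

        loss = None
        if intent_labels is not None and slot_labels is not None:
            ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks padding / sub-word positions
            intent_loss = ce(intent_logits, intent_labels)
            slot_loss = ce(slot_logits.view(-1, slot_logits.size(-1)), slot_labels.view(-1))
            loss = intent_loss + slot_loss  # joint objective, equally weighted here
        return loss, intent_logits, slot_logits
```

At inference time, the predicted intent is the argmax over `intent_logits` and the slot sequence is the per-token argmax over `slot_logits`; sharing the encoder lets the two tasks regularize each other, which is the motivation for training them jointly rather than with separate models.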

Similar Work