
A Bi-model Based RNN Semantic Frame Parsing Model For Intent Detection And Slot Filling

Yu Wang, Yilin Shen, Hongxia Jin. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 2018 – 202 citations


Intent detection and slot filling are the two main tasks in building a spoken language understanding (SLU) system. Multiple deep-learning-based models have demonstrated good results on these tasks. The most effective algorithms are based on sequence-to-sequence (or "encoder-decoder") structures and generate the intents and semantic tags either with separate models or with a joint model. Most previous studies, however, either treat intent detection and slot filling as two separate parallel tasks, or use a single sequence-to-sequence model to generate both the semantic tags and the intent. Because these approaches model the two tasks with one (joint) neural-network-based model (including encoder-decoder structures), they may not fully exploit the cross-impact between the tasks. In this paper, new bi-model based RNN semantic frame parsing network structures are designed to perform intent detection and slot filling jointly, capturing their cross-impact on each other through two correlated bidirectional LSTMs (BLSTMs). Our bi-model structure with a decoder achieves state-of-the-art results on the benchmark ATIS data, with about 0.5% intent accuracy improvement and 0.9% slot filling improvement.
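The core idea of the bi-model structure is that each task-specific network reads the other network's hidden state from the previous time step, so the two tasks influence each other during encoding. A minimal NumPy sketch of this cross-impact connection is below; it uses simple unidirectional tanh-RNN cells in place of the paper's BLSTMs, and all dimensions, weights, and label counts are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, H = 5, 8, 16  # sequence length, input dim, hidden dim (illustrative)

def rnn_cell(x, h_self, h_other, Wx, Wh, Wc):
    # Recurrent step that also reads the *other* network's previous hidden
    # state -- the cross-impact connection of the bi-model structure.
    return np.tanh(x @ Wx + h_self @ Wh + h_other @ Wc)

def init_params(D, H):
    return (rng.normal(0, 0.1, (D, H)),   # input weights
            rng.normal(0, 0.1, (H, H)),   # own recurrent weights
            rng.normal(0, 0.1, (H, H)))   # weights on the other net's state

params_intent = init_params(D, H)
params_slot = init_params(D, H)

x = rng.normal(size=(T, D))               # one utterance, T token embeddings
h_i, h_s = np.zeros(H), np.zeros(H)
slot_states = []
for t in range(T):
    h_i_new = rnn_cell(x[t], h_i, h_s, *params_intent)  # intent net reads h_s
    h_s_new = rnn_cell(x[t], h_s, h_i, *params_slot)    # slot net reads h_i
    h_i, h_s = h_i_new, h_s_new
    slot_states.append(h_s)

# Intent is predicted once from the final intent-network state; slot tags
# are predicted per token from the slot-network states. 3 intent classes
# and 4 slot labels are hypothetical placeholders.
W_intent = rng.normal(0, 0.1, (H, 3))
W_slot = rng.normal(0, 0.1, (H, 4))
intent_logits = h_i @ W_intent               # shape (3,)
slot_logits = np.stack(slot_states) @ W_slot  # shape (T, 4)
print(intent_logits.shape, slot_logits.shape)
```

The asymmetry in outputs mirrors the tasks: one intent prediction per utterance, but one slot tag per token.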

Similar Work