Adapting Pretrained Transformer To Lattices For Spoken Language Understanding

Chao-Wei Huang, Yun-Nung Chen. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019. 43 citations

Tags: ASRU, Compositional Generalization, Datasets, Evaluation, Fine Tuning, Has Code, Interdisciplinary Approaches, Model Architecture, Multimodal, Semantic Representation

Lattices are compact representations that encode multiple hypotheses, such as speech recognition results or different word segmentations. Encoding lattices, as opposed to the 1-best results produced by an automatic speech recognizer (ASR), has been shown to boost the performance of spoken language understanding (SLU). Recently, pretrained language models built on the transformer architecture have achieved state-of-the-art results on natural language understanding, but their ability to encode lattices has not been explored. This paper therefore adapts pretrained transformers to lattice inputs in order to perform understanding tasks specific to spoken language. Our experiments on the benchmark ATIS dataset show that fine-tuning pretrained transformers with lattice inputs yields clear improvements over fine-tuning with 1-best results. Further evaluation demonstrates the effectiveness of our methods under different acoustic conditions. Our code is available at https://github.com/MiuLab/Lattice-SLU.
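
The abstract describes the approach only at a high level. As a concrete illustration, the sketch below shows one common way to present a word lattice to a transformer: assign each arc a position derived from the longest path reaching its start node, and build an attention mask that lets two arcs attend to each other only when they can co-occur on a path through the lattice. This is a minimal sketch under assumed conventions; the `lattice_inputs` helper, the arc and edge input format, and the specific masking rule are illustrative assumptions rather than the authors' exact implementation (see the linked repository for that).

```python
# Minimal sketch: turn a word lattice into transformer inputs.
# A lattice is a DAG; each arc carries a token and connects two nodes.
# We (1) give each arc a position from the longest path to its start node,
# so competing hypotheses over the same span share positions, and
# (2) mask attention so arcs only attend to arcs they can co-occur with.

import torch

def lattice_inputs(arcs, edges, num_nodes):
    """arcs: list of (token_id, start_node, end_node) lattice arcs.
    edges: list of (u, v) node connections implied by the arcs.
    Assumes node ids are topologically ordered (u < v for every edge).
    Returns per-arc position ids and a boolean attention mask."""
    n = len(arcs)

    # Longest-path distance from the lattice start to each node.
    dist = [0] * num_nodes
    for u, v in sorted(edges):  # increasing u works for topological ids
        dist[v] = max(dist[v], dist[u] + 1)
    positions = torch.tensor([dist[s] for _, s, _ in arcs])

    # Node reachability via transitive closure (O(V^3); fine for small lattices).
    reach = [[False] * num_nodes for _ in range(num_nodes)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(num_nodes):
        for i in range(num_nodes):
            if reach[i][k]:
                for j in range(num_nodes):
                    reach[i][j] = reach[i][j] or reach[k][j]

    # Two arcs may attend to each other iff one can precede the other on
    # some path, i.e. they can appear together in a single hypothesis.
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i, (_, si, ei) in enumerate(arcs):
        for j, (_, sj, ej) in enumerate(arcs):
            mask[i][j] = (i == j or ei == sj or ej == si
                          or reach[ei][sj] or reach[ej][si])
    return positions, mask

# Toy lattice: "find" -> {"flights" | "lights"} -> "to_boston"
arcs = [(101, 0, 1), (202, 1, 2), (203, 1, 2), (304, 2, 3)]
edges = [(0, 1), (1, 2), (2, 3)]
pos, mask = lattice_inputs(arcs, edges, num_nodes=4)
# pos -> [0, 1, 1, 2]: the competing arcs share position 1,
# and the mask blocks them from attending to each other.
```

The resulting position ids would replace the usual sequential positions in the transformer's position embeddings, and the mask would be applied inside self-attention; how those two pieces are wired into a pretrained model is exactly where implementations differ, so the repository above is the authoritative reference for the paper's method.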

Similar Work