Utterance-level End-to-end Language Identification Using Attention-based CNN-BLSTM

Weicheng Cai, Danwei Cai, Shen Huang, Ming Li. ICASSP 2019 – IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) – 49 citations

[Paper]

In this paper, we present an end-to-end language identification framework: the attention-based Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BLSTM). The model operates at the utterance level, meaning the utterance-level decision is obtained directly from the output of the neural network. To handle speech utterances of arbitrary and potentially long duration, we combine the CNN-BLSTM model with a self-attentive pooling layer. The front-end CNN-BLSTM module acts as a local pattern extractor for the variable-length inputs, and the self-attentive pooling layer is built on top to produce a fixed-dimensional utterance-level representation. We conducted experiments on the NIST LRE07 closed-set task, and the results show that the proposed attention-based CNN-BLSTM model achieves error reduction comparable to other state-of-the-art utterance-level neural network approaches on the 3-second, 10-second, and 30-second duration tasks.
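The sketch below illustrates the overall idea of the architecture described in the abstract: a CNN front end extracts local patterns from variable-length spectrogram input, a BLSTM produces frame-level features, and a self-attentive pooling layer collapses them into a single fixed-dimensional vector for an utterance-level language decision. This is a minimal PyTorch sketch; the layer sizes, CNN configuration, input features, and number of target languages are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of an attention-based CNN-BLSTM for utterance-level LID.
# All hyperparameters (channels, hidden sizes, n_mels, n_langs) are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentivePooling(nn.Module):
    """Collapse a variable-length sequence of frame-level features into a
    fixed-dimensional utterance-level vector via learned attention weights."""
    def __init__(self, feat_dim, attn_dim=128):
        super().__init__()
        self.proj = nn.Linear(feat_dim, attn_dim)        # W in alpha_t = softmax(v^T tanh(W h_t))
        self.score = nn.Linear(attn_dim, 1, bias=False)  # v

    def forward(self, h):                                # h: (batch, time, feat_dim)
        scores = self.score(torch.tanh(self.proj(h)))    # (batch, time, 1)
        alpha = F.softmax(scores, dim=1)                 # attention weights over time
        return (alpha * h).sum(dim=1)                    # (batch, feat_dim)

class CNNBLSTM_LID(nn.Module):
    def __init__(self, n_mels=64, n_langs=14):
        super().__init__()
        # Front-end CNN: local pattern extractor over the input spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                        # pool frequency, keep time resolution
        )
        self.blstm = nn.LSTM(input_size=32 * (n_mels // 2), hidden_size=256,
                             bidirectional=True, batch_first=True)
        self.pool = SelfAttentivePooling(feat_dim=512)
        self.classifier = nn.Linear(512, n_langs)        # utterance-level language scores

    def forward(self, x):                                # x: (batch, 1, n_mels, time), any duration
        f = self.cnn(x)                                  # (batch, C, n_mels/2, time)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)   # (batch, time, C * n_mels/2)
        h, _ = self.blstm(f)                             # frame-level features
        return self.classifier(self.pool(h))             # one decision per utterance

# Usage: utterances of different durations map to the same fixed-size output.
model = CNNBLSTM_LID()
logits = model(torch.randn(2, 1, 64, 300))               # shape (2, n_langs)
```

Because the attention weights are normalized over the time axis, the pooled representation has the same dimensionality regardless of utterance duration, which is what allows a single network to handle the 3-, 10-, and 30-second conditions end to end.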

Similar Work