Speech Emotion Recognition With Dual-sequence LSTM Architecture

Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh. 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020) – 127 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Affective Computing ICASSP Image Text Integration Model Architecture Visual Contextualization

Speech Emotion Recognition (SER) has emerged as a critical component of the next generation of human-machine interfacing technologies. In this work, we propose a new dual-level model that predicts emotions based on both MFCC features and mel-spectrograms produced from raw audio signals. Each utterance is preprocessed into MFCC features and two mel-spectrograms at different time-frequency resolutions. A standard LSTM processes the MFCC features, while a novel LSTM architecture, denoted as Dual-Sequence LSTM (DS-LSTM), processes the two mel-spectrograms simultaneously. The outputs of the two branches are later averaged to produce the final classification of the utterance. Our proposed model achieves, on average, a weighted accuracy of 72.7% and an unweighted accuracy of 73.3%—a 6% improvement over current state-of-the-art unimodal models—and is comparable with multimodal models that leverage textual information as well as audio signals.
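The dual-level pipeline described in the abstract can be sketched in PyTorch. The sketch below is an illustration, not the authors' implementation: the DS-LSTM branch is approximated by two parallel LSTMs over the two mel-spectrogram resolutions with concatenated final hidden states, and all dimensions (40 MFCC coefficients, 128 mel bands, hidden size 128, four emotion classes) are assumptions.

```python
# Minimal sketch of the dual-level SER model; the DS-LSTM cell is replaced
# here by two parallel LSTMs, so this only approximates the paper's design.
import torch
import torch.nn as nn

class DualLevelSER(nn.Module):
    def __init__(self, n_mfcc=40, n_mels=128, hidden=128, n_classes=4):
        super().__init__()
        # Branch 1: standard LSTM over the MFCC frame sequence.
        self.mfcc_lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        # Branch 2: stand-in for the DS-LSTM -- two LSTMs, one per
        # mel-spectrogram resolution, fused by concatenation.
        self.mel_lstm_a = nn.LSTM(n_mels, hidden, batch_first=True)
        self.mel_lstm_b = nn.LSTM(n_mels, hidden, batch_first=True)
        self.mfcc_head = nn.Linear(hidden, n_classes)
        self.mel_head = nn.Linear(2 * hidden, n_classes)

    def forward(self, mfcc, mel_a, mel_b):
        # Each input: (batch, time, features); sequence lengths may differ.
        _, (h_mfcc, _) = self.mfcc_lstm(mfcc)
        _, (h_a, _) = self.mel_lstm_a(mel_a)
        _, (h_b, _) = self.mel_lstm_b(mel_b)
        logits_mfcc = self.mfcc_head(h_mfcc[-1])
        logits_mel = self.mel_head(torch.cat([h_a[-1], h_b[-1]], dim=-1))
        # Per the abstract, the two branch outputs are averaged for the
        # final utterance-level classification.
        return (logits_mfcc + logits_mel) / 2

model = DualLevelSER()
mfcc = torch.randn(8, 300, 40)    # 8 utterances, 300 MFCC frames
mel_a = torch.randn(8, 150, 128)  # coarse time resolution
mel_b = torch.randn(8, 600, 128)  # fine time resolution
print(model(mfcc, mel_a, mel_b).shape)  # torch.Size([8, 4])
```

The averaging step mirrors the paper's late fusion of the MFCC and mel-spectrogram branches; swapping the concatenation fusion for the actual DS-LSTM cell would require the update equations from the paper itself.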

Similar Work