MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis

Georgios Paraskevopoulos, Efthymios Georgiou, Alexandros Potamianos. ICASSP 2022 – IEEE International Conference on Acoustics, Speech and Signal Processing – 43 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Tags: ICASSP, Model Architecture

Current deep learning approaches for multimodal fusion rely on bottom-up fusion of high- and mid-level latent modality representations (late/mid fusion) or low-level sensory inputs (early fusion). Models of human perception highlight the importance of top-down fusion, where high-level representations affect the way sensory inputs are perceived, i.e., cognition affects perception. These top-down interactions are not captured in current deep learning models. In this work we propose a neural architecture that captures top-down cross-modal interactions, using a feedback mechanism in the forward pass during network training. The proposed mechanism extracts high-level representations for each modality and uses them to mask the sensory inputs, allowing the model to perform top-down feature masking. We apply the proposed model to multimodal sentiment recognition on CMU-MOSEI. Our method shows consistent improvements over the well-established MulT and over our strong late fusion baseline, achieving state-of-the-art results.
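The core idea of the feedback mechanism can be illustrated with a short sketch: each modality is first encoded bottom-up into a high-level summary, those summaries are turned into sigmoid gates that mask the raw low-level features of the other modalities, and the masked inputs are re-encoded and fused. The sketch below is a minimal PyTorch illustration of this top-down masking pattern under assumed module names (`TopDownMask`, `FeedbackFusion`) and dimensions; it is not the authors' reference implementation, and the final fusion step is simplified to late fusion for brevity.

```python
# Minimal sketch of top-down feature masking, assuming LSTM encoders and
# illustrative module names; not the authors' reference implementation.
import torch
import torch.nn as nn


class TopDownMask(nn.Module):
    """Gate one modality's low-level features, conditioned on the
    high-level summaries of the other two modalities."""

    def __init__(self, high_dim, feat_dim):
        super().__init__()
        self.proj = nn.Linear(2 * high_dim, feat_dim)

    def forward(self, feats, high_a, high_b):
        # feats: (batch, seq_len, feat_dim) low-level inputs of this modality
        # high_a, high_b: (batch, high_dim) summaries of the other modalities
        gate = torch.sigmoid(self.proj(torch.cat([high_a, high_b], dim=-1)))
        return feats * gate.unsqueeze(1)  # broadcast the gate over time steps


class FeedbackFusion(nn.Module):
    """Bottom-up pass encodes each modality; the feedback (top-down) pass
    masks the raw inputs with gates derived from the other modalities'
    summaries, then re-encodes and fuses the masked streams."""

    def __init__(self, dims, hidden=64, num_classes=1):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.LSTM(d, hidden, batch_first=True) for d in dims]
        )
        self.masks = nn.ModuleList([TopDownMask(hidden, d) for d in dims])
        self.head = nn.Linear(3 * hidden, num_classes)

    def forward(self, text, audio, visual):
        inputs = [text, audio, visual]
        # Bottom-up pass: one high-level summary (last hidden state) per modality.
        highs = [enc(x)[1][0][-1] for enc, x in zip(self.encoders, inputs)]
        # Top-down pass: gate each modality's raw features using the others' summaries.
        masked = [
            self.masks[i](inputs[i], highs[(i + 1) % 3], highs[(i + 2) % 3])
            for i in range(3)
        ]
        # Re-encode the masked inputs and fuse their summaries (late fusion here).
        fused = torch.cat(
            [enc(x)[1][0][-1] for enc, x in zip(self.encoders, masked)], dim=-1
        )
        return self.head(fused)
```

Note that the masking happens inside a single forward pass, so the feedback path is trained jointly with the rest of the network by ordinary backpropagation.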

Similar Work