Multimodal Language Analysis With Recurrent Multistage Fusion

Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018 – 199 citations

[Paper] | Search on Google Scholar | Search on Semantic Scholar
Affective Computing · Compositional Generalization · Content Enrichment · Datasets · EMNLP · Image Text Integration · Interactive Environments · Interdisciplinary Approaches · Multimodal Semantic Representation · Neural Machine Translation · Productivity Enhancement · Question Answering · Variational Autoencoders · Visual Contextualization

Computational modeling of human multimodal language is an emerging research area in natural language processing spanning the language, visual, and acoustic modalities. Comprehending multimodal language requires modeling not only the interactions within each modality (intra-modal interactions) but, more importantly, the interactions between modalities (cross-modal interactions). In this paper, we propose the Recurrent Multistage Fusion Network (RMFN), which decomposes the fusion problem into multiple stages, each focusing on a subset of multimodal signals for specialized, effective fusion. Cross-modal interactions are modeled using this multistage fusion approach, which builds upon intermediate representations of previous stages. Temporal and intra-modal interactions are modeled by integrating our proposed fusion approach with a system of recurrent neural networks. RMFN achieves state-of-the-art performance in modeling human multimodal language across three public datasets for multimodal sentiment analysis, emotion recognition, and speaker traits recognition. We provide visualizations showing that each stage of fusion focuses on a different subset of multimodal signals, learning increasingly discriminative multimodal representations.
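The core idea is that fusion happens over several stages: at each stage, an attention-like module highlights a subset of the concatenated multimodal signals, conditioned on the fused summary from the previous stage, and a fusion module folds the highlighted signals into an updated summary. The PyTorch sketch below illustrates this staged pattern only; the module names (`highlight`, `fuse`), dimensions, and gating choices are illustrative assumptions, not the authors' released implementation, and the recurrent (temporal) component is omitted for brevity.

```python
import torch
import torch.nn as nn

class MultistageFusion(nn.Module):
    """Minimal sketch of multistage fusion (hypothetical, simplified)."""

    def __init__(self, dims, fused_dim, num_stages=3):
        super().__init__()
        total = sum(dims)  # concatenated language + visual + acoustic size
        # "highlight" (assumed name): per-stage gate over the concatenated
        # signals, conditioned on the previous stage's fused summary.
        self.highlight = nn.ModuleList(
            nn.Linear(total + fused_dim, total) for _ in range(num_stages)
        )
        # "fuse" (assumed name): integrate highlighted signals with the summary.
        self.fuse = nn.ModuleList(
            nn.Linear(total + fused_dim, fused_dim) for _ in range(num_stages)
        )
        self.num_stages = num_stages
        self.fused_dim = fused_dim

    def forward(self, language, visual, acoustic):
        x = torch.cat([language, visual, acoustic], dim=-1)
        z = x.new_zeros(x.size(0), self.fused_dim)  # stage-0 fused summary
        for k in range(self.num_stages):
            # Each stage attends to a (soft) subset of the multimodal signals...
            gate = torch.sigmoid(self.highlight[k](torch.cat([x, z], dim=-1)))
            # ...and builds on the intermediate representation of the prior stage.
            z = torch.tanh(self.fuse[k](torch.cat([gate * x, z], dim=-1)))
        return z  # cross-modal representation, refined over the stages

# Toy usage for one time step (feature sizes are arbitrary examples):
rmf = MultistageFusion(dims=(300, 35, 74), fused_dim=128, num_stages=3)
l, v, a = torch.randn(8, 300), torch.randn(8, 35), torch.randn(8, 74)
fused = rmf(l, v, a)  # shape: (8, 128)
```

In the full model, a fused vector like `fused` would be computed at every time step and fed into a system of recurrent networks to capture the temporal and intra-modal interactions described above.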

Similar Work