
Seq2seq2sentiment: Multimodal Sequence To Sequence Models For Sentiment Analysis

Hai Pham, Thomas Manzini, Paul Pu Liang, Barnabas Poczos. Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), 2018 – 65 citations

[Paper]
Neural Machine Translation Vision Language

Multimodal machine learning is a core research area spanning the language, visual, and acoustic modalities. The central challenge in multimodal learning is learning representations that can process and relate information from multiple modalities. In this paper, we propose two methods for unsupervised learning of joint multimodal representations using sequence-to-sequence (Seq2Seq) methods: a *Seq2Seq Modality Translation Model* and a *Hierarchical Seq2Seq Modality Translation Model*. We also explore several variations on the multimodal inputs and outputs of these Seq2Seq models. Our experiments on multimodal sentiment analysis using the CMU-MOSI dataset indicate that our methods learn informative multimodal representations that outperform the baselines and achieve improved performance on multimodal sentiment analysis, most notably in the bimodal case, where our model improves F1 score by twelve points. We also discuss future directions for multimodal Seq2Seq methods.
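
To make the modality-translation idea concrete, below is a minimal PyTorch sketch of a Seq2Seq model that encodes one modality, decodes another, and reuses the encoder's final hidden state as the joint representation for sentiment prediction. This is an illustrative reconstruction, not the authors' implementation: the LSTM encoder/decoder, hidden size, teacher-forced decoding, joint loss, and the GloVe/COVAREP feature dimensions in the usage example are all assumptions.

```python
# Sketch of a Seq2Seq modality translation model (assumed architecture, not the paper's code).
import torch
import torch.nn as nn


class ModalityTranslationSeq2Seq(nn.Module):
    """Encode a source modality (e.g. language features), decode a target
    modality (e.g. acoustic features); the encoder's final hidden state is
    reused as the joint multimodal representation for sentiment prediction."""

    def __init__(self, src_dim, tgt_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.LSTM(src_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(tgt_dim, hidden_dim, batch_first=True)
        self.project = nn.Linear(hidden_dim, tgt_dim)   # decoder state -> target features
        self.sentiment = nn.Linear(hidden_dim, 1)       # representation -> sentiment score

    def forward(self, src, tgt):
        # src: (batch, src_len, src_dim), tgt: (batch, tgt_len, tgt_dim)
        _, (h, c) = self.encoder(src)                   # h: (1, batch, hidden_dim)
        representation = h[-1]                          # joint multimodal representation

        # Teacher-forced decoding: shift the target sequence right by one step.
        start = torch.zeros_like(tgt[:, :1, :])
        dec_in = torch.cat([start, tgt[:, :-1, :]], dim=1)
        dec_out, _ = self.decoder(dec_in, (h, c))
        reconstruction = self.project(dec_out)          # predicted target-modality sequence

        sentiment = self.sentiment(representation)      # e.g. CMU-MOSI score
        return reconstruction, sentiment


if __name__ == "__main__":
    # Hypothetical feature sizes: 300-d word embeddings -> 74-d acoustic features.
    model = ModalityTranslationSeq2Seq(src_dim=300, tgt_dim=74)
    src = torch.randn(8, 20, 300)
    tgt = torch.randn(8, 20, 74)
    recon, sent = model(src, tgt)
    translation_loss = nn.functional.mse_loss(recon, tgt)
    sentiment_loss = nn.functional.l1_loss(sent.squeeze(-1), torch.zeros(8))
    (translation_loss + sentiment_loss).backward()
```

The key design point the sketch illustrates is that the translation objective forces the encoder state to carry information about both modalities, so the same representation can then be fed to a sentiment head; the hierarchical variant described in the paper stacks such translations rather than using a single encoder-decoder pair.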

Similar Work