A Transformer-based Joint-encoding For Emotion Recognition And Sentiment Analysis | Awesome LLM Papers

A Transformer-based Joint-encoding For Emotion Recognition And Sentiment Analysis

Jean-Benoit Delbrouck, Noé Tits, Mathilde Brousmiche, Stéphane Dupont. Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML) 2020 – 83 citations

[Code] [Paper]
Affective Computing Datasets Has Code Image Text Integration Model Architecture Visual Contextualization

Expressed sentiment and emotion are two crucial factors in understanding human multimodal language. This paper describes a Transformer-based joint-encoding (TBJE) for the tasks of Emotion Recognition and Sentiment Analysis. In addition to using the Transformer architecture, our approach relies on a modular co-attention and a glimpse layer to jointly encode one or more modalities. The proposed solution has also been submitted to the ACL20: Second Grand-Challenge on Multimodal Language to be evaluated on the CMU-MOSEI dataset. The code to replicate the presented experiments is open-source: https://github.com/jbdel/MOSEI_UMONS.
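The two components named in the abstract can be illustrated with a minimal NumPy sketch: a co-attention step in which one modality's features query another's, followed by a glimpse layer that pools the resulting sequence into a fixed number of summary vectors. This is an illustrative sketch under assumed shapes and function names (`co_attention`, `glimpse_pool`, random weights), not the authors' implementation; consult the linked repository for the actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(q_mod, kv_mod, Wq, Wk, Wv):
    # One modality supplies queries; the other supplies keys/values,
    # so the first modality is re-encoded in terms of the second.
    Q, K, V = q_mod @ Wq, kv_mod @ Wk, kv_mod @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V            # (T_q, d)

def glimpse_pool(seq, W_g):
    # Glimpse layer: g attention distributions over the time axis
    # collapse a variable-length sequence into g fixed-size vectors.
    attn = softmax(seq @ W_g, axis=0)     # (T, g)
    return attn.T @ seq                   # (g, d)

# Toy shapes: hypothetical feature dim, sequence lengths, glimpse count.
rng = np.random.default_rng(0)
d, T_text, T_audio, g = 8, 5, 7, 2
text = rng.normal(size=(T_text, d))
audio = rng.normal(size=(T_audio, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

fused = co_attention(text, audio, Wq, Wk, Wv)       # text attends to audio
pooled = glimpse_pool(fused, rng.normal(size=(d, g)))
print(pooled.shape)  # (2, 8): g glimpse vectors of dimension d
```

The glimpse output has a fixed size regardless of input sequence length, which is what lets the jointly encoded modalities feed a standard classification head.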

Similar Work