
Supervised Multimodal Bitransformers For Classifying Images And Text

Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Ethan Perez, Davide Testuggine. arXiv 2019 – 163 citations


Self-supervised bidirectional transformer models such as BERT have led to dramatic improvements in a wide variety of textual classification tasks. The modern digital world is increasingly multimodal, however, and textual information is often accompanied by other modalities such as images. We introduce a supervised multimodal bitransformer model that fuses information from text and image encoders, and obtain state-of-the-art performance on various multimodal classification benchmark tasks, outperforming strong baselines, including on hard test sets specifically designed to measure multimodal performance.
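The fusion idea described above can be sketched in a few lines: image features from a pretrained CNN are projected into the token embedding space and concatenated with the text token embeddings, so a single bidirectional transformer can attend jointly over both modalities. This is a minimal illustrative sketch, not the paper's implementation; all dimensions and the random stand-in values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not taken from the paper).
d_img, d_model = 2048, 768      # CNN feature dim, transformer hidden dim
n_text_tokens, n_img_embeds = 16, 3

# Stand-in for text token embeddings from a BERT-style encoder.
text_tokens = rng.standard_normal((n_text_tokens, d_model))

# Stand-in for pooled image features from a CNN (e.g. ResNet outputs).
image_features = rng.standard_normal((n_img_embeds, d_img))

# Learned linear projection mapping image features into the token space.
W_proj = rng.standard_normal((d_img, d_model)) * 0.01

# Core idea: turn image features into "image tokens" and prepend them,
# so the transformer's self-attention fuses the two modalities.
image_tokens = image_features @ W_proj
multimodal_input = np.concatenate([image_tokens, text_tokens], axis=0)

print(multimodal_input.shape)  # one sequence covering both modalities
```

In the actual model this concatenated sequence (with segment and position embeddings) is fed to the bitransformer, and a classification head on the first output position produces the prediction.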
