Decoupling The Role Of Data, Attention, And Losses In Multimodal Transformers

Lisa Anne Hendricks, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, Aida Nematzadeh. Transactions of the Association for Computational Linguistics 2021 – 62 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
ACL Compositional Generalization Datasets Image Text Integration Interdisciplinary Approaches Model Architecture Neural Machine Translation Question Answering TACL Training Techniques Visual Contextualization

Recently, multimodal transformer models have gained popularity because their performance on language and vision tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.
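The architectural comparison in the abstract contrasts merged (multimodal) attention, where image and text tokens attend to one another within a single transformer stream, with modality-specific attention, where each modality is processed by its own attention layers. The sketch below is purely illustrative and not taken from the paper; the class name, layer sizes, and token counts are arbitrary assumptions showing the merged-attention idea in PyTorch.

```python
# Illustrative sketch (not the paper's code): "merged" multimodal attention
# lets every image and text token attend to every other token in a single
# self-attention block, rather than keeping the two modality streams separate.
import torch
import torch.nn as nn


class MergedMultimodalBlock(nn.Module):
    """One transformer block with joint attention over both modalities."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens, image_tokens):
        # Concatenate text and image tokens so attention spans both modalities.
        x = torch.cat([text_tokens, image_tokens], dim=1)
        attended, _ = self.attn(x, x, x)
        return self.norm(x + attended)


# Toy usage: batch of 2, 8 text tokens and 16 image regions, hidden size 256.
text = torch.randn(2, 8, 256)
image = torch.randn(2, 16, 256)
out = MergedMultimodalBlock()(text, image)
print(out.shape)  # torch.Size([2, 24, 256])
```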

Similar Work