
Efficient Multimodal Fusion Via Interactive Prompting

Yaowei Li, Ruijie Quan, Linchao Zhu, Yi Yang. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) – 44 citations

[Paper]
3d Representation CVPR Compositional Generalization Content Enrichment Image Text Integration Interactive Environments Interdisciplinary Approaches Multimodal Semantic Representation Neural Machine Translation Productivity Enhancement Prompting Question Answering Tools Training Techniques Visual Contextualization Visual Question Answering

Large-scale pre-training has brought unimodal fields such as computer vision and natural language processing into a new era. Following this trend, the size of multimodal learning models keeps increasing, leading to an urgent need to reduce the massive computational cost of finetuning these models for downstream tasks. In this paper, we propose an efficient and flexible multimodal fusion method, namely PMF, tailored for fusing unimodally pre-trained transformers. Specifically, we first present a modular multimodal fusion framework that exhibits high flexibility and facilitates mutual interactions among different modalities. In addition, we disentangle vanilla prompts into three types in order to learn different optimizing objectives for multimodal learning. It is also worth noting that we propose to add prompt vectors only to the deep layers of the unimodal transformers, thus significantly reducing training memory usage. Experimental results show that our proposed method achieves performance comparable to several other multimodal finetuning methods with less than 3% of trainable parameters and up to 66% savings in training memory usage.
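The abstract's two key mechanisms are (1) letting two unimodally pre-trained transformers exchange information through learnable prompt vectors rather than through full cross-modal layers, and (2) injecting those prompts only into the deep layers, so gradients never have to flow through the shallow, frozen layers. The sketch below is a rough PyTorch illustration of that idea, not the authors' implementation; the class name, the three prompt-type names, and all hyperparameters (prompt length, fusion start layer, hidden size) are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn

class DeepLayerPromptFusion(nn.Module):
    """Illustrative sketch: learnable prompt vectors are prepended only at the
    deeper layers of two frozen unimodal transformer stacks, so that the two
    streams can interact while the shallow layers stay untouched."""

    def __init__(self, vision_layers, text_layers, dim=768, num_prompts=4, fusion_start=8):
        super().__init__()
        self.vision_layers = nn.ModuleList(vision_layers)  # frozen vision blocks
        self.text_layers = nn.ModuleList(text_layers)      # frozen text blocks
        for p in self.vision_layers.parameters():
            p.requires_grad_(False)
        for p in self.text_layers.parameters():
            p.requires_grad_(False)
        self.fusion_start = fusion_start                    # first layer that receives prompts
        n_deep = len(vision_layers) - fusion_start
        # Three prompt types per deep layer (names are illustrative, not the paper's):
        # a shared "query" prompt plus a context prompt carried over from each modality.
        self.query_prompts = nn.Parameter(0.02 * torch.randn(n_deep, num_prompts, dim))
        self.vision_ctx_prompts = nn.Parameter(0.02 * torch.randn(n_deep, num_prompts, dim))
        self.text_ctx_prompts = nn.Parameter(0.02 * torch.randn(n_deep, num_prompts, dim))

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (B, N_img, dim), txt_tokens: (B, N_txt, dim)
        b = img_tokens.size(0)
        for i, (v_blk, t_blk) in enumerate(zip(self.vision_layers, self.text_layers)):
            if i < self.fusion_start:
                # Shallow layers: plain unimodal forward passes, no trainable prompts.
                img_tokens, txt_tokens = v_blk(img_tokens), t_blk(txt_tokens)
                continue
            k = i - self.fusion_start
            q = self.query_prompts[k].expand(b, -1, -1)
            v_ctx = self.vision_ctx_prompts[k].expand(b, -1, -1)
            t_ctx = self.text_ctx_prompts[k].expand(b, -1, -1)
            # Each stream attends to the shared query prompt plus the other
            # modality's context prompt, which is how the two streams interact.
            img_tokens = v_blk(torch.cat([q, t_ctx, img_tokens], dim=1))
            txt_tokens = t_blk(torch.cat([q, v_ctx, txt_tokens], dim=1))
            # Drop the prompt positions before the next layer.
            img_tokens = img_tokens[:, 2 * q.size(1):]
            txt_tokens = txt_tokens[:, 2 * q.size(1):]
        return img_tokens, txt_tokens
```

Under these assumptions, only the prompt parameters receive gradients, which is consistent with the abstract's claim of finetuning a small fraction of parameters; starting fusion late in the network (here at a hypothetical layer 8 of 12) is what allows the activations of the shallow layers to be discarded and yields the reported memory savings.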

Similar Work