Dynamic Fusion With Intra- And Inter- Modality Attention Flow For Visual Question Answering

Gao Peng, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven Hoi, Xiaogang Wang, Hongsheng Li. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019 – 360 citations

[Paper]
Tags: CVPR, Datasets

Learning an effective fusion of multi-modality features is at the heart of visual question answering. We propose a novel method for dynamically fusing multi-modal features with intra- and inter-modality information flow, which alternately passes dynamic information between and across the visual and language modalities. It robustly captures the high-level interactions between the language and vision domains, thus significantly improving the performance of visual question answering. We also show that the proposed dynamic intra-modality attention flow, conditioned on the other modality, can dynamically modulate the intra-modality attention of the target modality, which is vital for multi-modality feature fusion. Experimental evaluations on the VQA 2.0 dataset show that the proposed method achieves state-of-the-art VQA performance. Extensive ablation studies are carried out for a comprehensive analysis of the proposed method.
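To make the abstract's two attention mechanisms concrete, here is a minimal PyTorch sketch of one attention-flow step: inter-modality cross-attention between visual and language features, followed by intra-modality self-attention whose input is gated on a pooled summary of the other modality. This is an illustration of the idea only, not the authors' DFAF implementation; the module names, the sigmoid gate, and all shapes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn


class InterModalityAttention(nn.Module):
    """Cross-attention: each modality attends to the other (illustrative sketch)."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.q2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.v2q = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis, lang):
        # Visual features query the language features, and vice versa.
        vis_out, _ = self.q2v(vis, lang, lang)
        lang_out, _ = self.v2q(lang, vis, vis)
        return vis_out, lang_out


class DynamicIntraModalityAttention(nn.Module):
    """Self-attention within one modality, conditioned on the other modality.

    The sigmoid gate over mean-pooled features of the other modality is one
    plausible realization of the 'dynamic' conditioning the abstract describes.
    """

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x, other):
        g = self.gate(other.mean(dim=1, keepdim=True))  # (B, 1, dim)
        h = x * g                                       # modulate before self-attention
        out, _ = self.attn(h, h, h)
        return out


class AttentionFlowBlock(nn.Module):
    """One inter-modality step followed by conditioned intra-modality steps."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.inter = InterModalityAttention(dim, num_heads)
        self.intra_v = DynamicIntraModalityAttention(dim, num_heads)
        self.intra_q = DynamicIntraModalityAttention(dim, num_heads)

    def forward(self, vis, lang):
        vis, lang = self.inter(vis, lang)
        vis = self.intra_v(vis, lang)   # visual self-attention, conditioned on language
        lang = self.intra_q(lang, vis)  # language self-attention, conditioned on vision
        return vis, lang


if __name__ == "__main__":
    vis = torch.randn(2, 36, 512)    # e.g. 36 region features per image
    lang = torch.randn(2, 14, 512)   # e.g. 14 question-token features
    block = AttentionFlowBlock(dim=512)
    vis, lang = block(vis, lang)
    print(vis.shape, lang.shape)     # torch.Size([2, 36, 512]) torch.Size([2, 14, 512])
```

In the paper such blocks are stacked so information flows alternately within and across the two modalities; the sketch above shows a single step of that flow.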

Similar Work