Image Captioning For Effective Use Of Language Models In Knowledge-based Visual Question Answering

Ander Salaberria, Gorka Azkune, Oier Lopez de Lacalle, Aitor Soroa, Eneko Agirre. Expert Systems with Applications, 2022 – 48 citations

Tags: Applications, Compositional Generalization, Image Text Integration, Interdisciplinary Approaches, Multimodal Semantic Representation, Question Answering, Visual Contextualization, Visual Question Answering

Integrating outside knowledge for reasoning in visio-linguistic tasks such as visual question answering (VQA) is an open problem. Given that pretrained language models have been shown to encode world knowledge, we propose a unimodal (text-only) training and inference procedure based on automatic off-the-shelf captioning of images and pretrained language models. Our results on a visual question answering task which requires external knowledge (OK-VQA) show that our text-only model outperforms pretrained multimodal (image-text) models with a comparable number of parameters. In contrast, our model is less effective on a standard VQA task (VQA 2.0), confirming that our text-only method is especially effective for tasks requiring external knowledge. In addition, we show that increasing the language model's size notably improves its performance, yielding results comparable to the state of the art with our largest model and significantly outperforming current multimodal systems, even when they are augmented with external knowledge. Our qualitative analysis on OK-VQA reveals that automatic captions often fail to capture relevant information in the images, which seems to be balanced by the stronger inference ability of the text-only language models. Our work opens up possibilities to further improve inference in visio-linguistic tasks.
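To make the caption-then-answer idea in the abstract concrete, the sketch below wires an off-the-shelf image captioner to a pretrained text-only language model. The specific models (nlpconnect/vit-gpt2-image-captioning, google/flan-t5-base), the prompt format, and the helper `answer_question` are illustrative assumptions, not the exact components or training setup used in the paper, which fine-tunes its language model on the VQA data.

```python
# Minimal sketch of captioning an image and answering a question with a
# text-only language model. Model names are illustrative stand-ins.
from transformers import pipeline

# Off-the-shelf image captioner (assumed model, not the paper's captioner).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Pretrained text-only language model used as the answerer (assumed model).
answerer = pipeline("text2text-generation", model="google/flan-t5-base")

def answer_question(image_path: str, question: str) -> str:
    # 1. Turn the image into text with automatic captioning.
    caption = captioner(image_path)[0]["generated_text"]
    # 2. Feed caption + question to the language model; its world knowledge
    #    can supply external information the caption alone does not contain.
    prompt = f"Context: {caption}\nQuestion: {question}\nAnswer:"
    return answerer(prompt, max_new_tokens=16)[0]["generated_text"]

if __name__ == "__main__":
    print(answer_question("example.jpg", "What country is this landmark in?"))
```

Because the image is reduced to a caption before the language model ever sees it, any knowledge needed beyond the caption must come from the language model itself, which is why the approach helps most on knowledge-demanding benchmarks such as OK-VQA.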

Similar Work