Question Relevance In VQA: Identifying Non-visual And False-premise Questions

Arijit Ray, Gordon Christie, Mohit Bansal, Dhruv Batra, Devi Parikh. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016) – 44 citations

[Paper]
Compositional Generalization EMNLP Interdisciplinary Approaches Model Architecture Question Answering Visual Question Answering

Visual Question Answering (VQA) is the task of answering natural-language questions about images. We introduce the novel problem of determining the relevance of questions to images in VQA. Current VQA models do not reason about whether a question is even related to the given image (e.g., "What is the capital of Argentina?") or whether it requires information from external resources to answer correctly. This can break the continuity of a dialogue in human-machine interaction. Our approaches for determining relevance are composed of two stages. Given an image and a question, (1) we first determine whether the question is visual or not; (2) if visual, we determine whether the question is relevant to the given image. Our approaches, based on LSTM-RNNs, VQA model uncertainty, and caption-question similarity, outperform strong baselines on both relevance tasks. We also present human studies showing that VQA models augmented with such question relevance reasoning are perceived as more intelligent, reasonable, and human-like.
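
To make the two-stage pipeline concrete, here is a minimal, purely illustrative Python sketch. The keyword lexicon, the bag-of-words cosine similarity, and the 0.1 threshold are toy stand-ins chosen for this example; the paper itself uses learned LSTM-RNN classifiers, VQA model uncertainty, and a trained caption-question similarity model, none of which are reproduced here.

```python
# Toy sketch of the two-stage question-relevance pipeline described above.
# Stage 1 (visual vs. non-visual) is approximated with a keyword heuristic;
# stage 2 (relevant vs. false-premise) is approximated with bag-of-words
# cosine similarity between the question and a pre-computed image caption.
# Both are hypothetical stand-ins for the paper's learned models.
from collections import Counter
import math

NON_VISUAL_CUES = {"capital", "president", "year", "invented"}  # toy lexicon

def is_visual(question: str) -> bool:
    """Stage 1: flag questions that only external knowledge can answer."""
    tokens = set(question.lower().strip("?").split())
    return not tokens & NON_VISUAL_CUES

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_relevant(question: str, caption: str, threshold: float = 0.1) -> bool:
    """Stage 2: treat the question as relevant when it overlaps the caption."""
    q = Counter(question.lower().strip("?").split())
    c = Counter(caption.lower().split())
    return cosine(q, c) >= threshold

def classify(question: str, caption: str) -> str:
    """Route a question into one of the three categories from the paper."""
    if not is_visual(question):
        return "non-visual"
    return "relevant" if is_relevant(question, caption) else "false-premise"

print(classify("What is the capital of Argentina?", "a dog on a couch"))  # non-visual
print(classify("What color is the dog?", "a dog on a couch"))             # relevant
print(classify("What color is the cat?", "a dog on a couch"))             # false-premise
```

The staged structure mirrors the paper's design choice: cheap rejection of non-visual questions happens first, so the image-grounded relevance check only runs on questions that could, in principle, be answered from the picture.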

Similar Work