Do Explanations Make VQA Models More Predictable To A Human?

Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). 57 citations.


A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable ‘explanations’ of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze if existing explanations indeed make a VQA model – its responses as well as failures – more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black-box do.