
Visual Question Answering With Memory-augmented Networks

Chao Ma, Chunhua Shen, Anthony Dick, Qi Wu, Peng Wang, Anton van den Hengel, Ian Reid. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2018 – 107 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
CVPR Datasets Evaluation Training Techniques

In this paper, we exploit a memory-augmented neural network to predict accurate answers to visual questions, even when those answers occur rarely in the training set. The memory network incorporates both internal and external memory blocks and selectively pays attention to each training exemplar. We show that memory-augmented neural networks are able to maintain a relatively long-term memory of scarce training exemplars, which is important for visual question answering because answers follow a heavy-tailed distribution in a general VQA setting. Experimental results on two large-scale benchmark datasets show the favorable performance of the proposed algorithm in comparison with the state of the art.
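To make the memory-augmentation idea concrete, below is a minimal sketch of an attention-based read over an external memory of training exemplars. It is not the paper's implementation; all names, dimensions, and the cosine-similarity addressing scheme are illustrative assumptions.

```python
# Hypothetical sketch (PyTorch) of reading from an external memory of stored
# exemplar features via soft attention. Names and sizes are illustrative only.
import torch
import torch.nn.functional as F


def read_external_memory(query, memory_keys, memory_values):
    """Attend over stored exemplars and return a blended memory vector.

    query:         (batch, d)        fused image-question feature
    memory_keys:   (num_slots, d)    keys of stored training exemplars
    memory_values: (num_slots, d_v)  stored values (e.g., answer embeddings)
    """
    # Cosine-similarity addressing over memory slots.
    sim = F.cosine_similarity(query.unsqueeze(1), memory_keys.unsqueeze(0), dim=-1)
    weights = F.softmax(sim, dim=-1)   # (batch, num_slots) attention weights
    read = weights @ memory_values     # (batch, d_v) weighted memory read
    return read


if __name__ == "__main__":
    q = torch.randn(4, 512)        # fused visual-question features (assumed sizes)
    keys = torch.randn(128, 512)   # 128 memory slots
    vals = torch.randn(128, 300)   # stored answer embeddings
    print(read_external_memory(q, keys, vals).shape)  # torch.Size([4, 300])
```

In a setup like this, rarely seen answers can still influence prediction because their exemplars remain addressable in the external memory rather than being averaged away during gradient updates.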

Similar Work