GuessWhat?! Visual Object Discovery Through Multi-modal Dialogue | Awesome LLM Papers

GuessWhat?! Visual Object Discovery Through Multi-modal Dialogue

Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017 – 423 citations

[Paper]
3d Representation CVPR Compositional Generalization Content Enrichment Datasets Dialogue & Multi Turn Image Text Integration Interactive Environments Multimodal Semantic Representation Variational Autoencoders Visual Contextualization Visual Question Answering

We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.
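The game structure described above can be sketched as a simple data record: an image with candidate objects, a hidden target, a sequence of yes/no question-answer turns, and a final guess. This is a minimal illustrative sketch; the class and field names are hypothetical and do not reflect the dataset's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one GuessWhat?! game: an image scene, a hidden
# target object, and a dialogue of yes/no question-answer pairs.
# Names are illustrative assumptions, not the paper's data format.
@dataclass
class GuessWhatGame:
    image_id: int
    candidate_objects: list              # object labels visible in the scene
    target_index: int                    # index of the hidden target object
    qa_pairs: list = field(default_factory=list)  # (question, answer) turns

    def ask(self, question: str, answer: str) -> None:
        """Record one dialogue turn; the oracle answers 'yes', 'no', or 'n/a'."""
        self.qa_pairs.append((question, answer))

    def guess(self, index: int) -> bool:
        """The questioner's final guess: success iff it hits the target."""
        return index == self.target_index

# Example: a two-turn game over a scene with three candidate objects.
game = GuessWhatGame(image_id=42,
                     candidate_objects=["dog", "ball", "car"],
                     target_index=1)
game.ask("Is it an animal?", "no")
game.ask("Is it round?", "yes")
print(game.guess(1))  # True: the questioner identified the target
```

The split between `ask` and `guess` mirrors the paper's two tasks: the oracle task (answering questions about the target) and the questioner task (generating questions and making the final guess).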

Similar Work