
Thinking In Space: How Multimodal Large Language Models See, Remember, And Recall Spaces

Jihan Yang, Shusheng Yang, Anjali W. Gupta, Rilyn Han, Li Fei-Fei, Saining Xie. No Venue 2024

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Compositional Generalization, Datasets, Evaluation, Image Text Integration, Interdisciplinary Approaches, Multimodal Semantic Representation, Prompting, Visual Contextualization

Humans possess the visual-spatial intelligence to remember spaces from sequential visual observations. However, can Multimodal Large Language Models (MLLMs) trained on million-scale video datasets also "think in space" from videos? We present a novel video-based visual-spatial intelligence benchmark (VSI-Bench) of over 5,000 question-answer pairs, and find that MLLMs exhibit competitive, though subhuman, visual-spatial intelligence. We probe models to express how they think in space both linguistically and visually and find that while spatial reasoning capabilities remain the primary bottleneck for MLLMs to reach higher benchmark performance, local world models and spatial awareness do emerge within these models. Notably, prevailing linguistic reasoning techniques (e.g., chain-of-thought, self-consistency, tree-of-thoughts) fail to improve performance, whereas explicitly generating cognitive maps during question-answering enhances MLLMs' spatial distance ability.
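
To make the cognitive-map finding concrete, below is a minimal sketch of what two-stage "generate a map, then answer" prompting could look like. It assumes a generic chat-style MLLM interface: the `call_mllm` callable, the 10x10 grid map format, and the prompt wording are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative sketch of cognitive-map prompting for spatial QA (not the paper's exact prompts).
# `call_mllm` is a hypothetical callable that sends a list of chat messages (with the video
# frames already attached upstream) to a multimodal LLM and returns its text response.

from typing import Callable, Dict, List

Message = Dict[str, str]

COGMAP_INSTRUCTION = (
    "Before answering, build a cognitive map of the scene: place every mentioned object "
    "on a 10x10 grid (top-down view) with approximate (row, col) coordinates."
)

def answer_with_cognitive_map(
    question: str,
    call_mllm: Callable[[List[Message]], str],
) -> str:
    """Two-stage prompting: (1) ask the model to draw a cognitive map of the video,
    (2) ask it to answer the spatial question conditioned on that map."""
    # Stage 1: elicit an explicit top-down map of the observed space.
    map_messages = [
        {"role": "system", "content": "You are watching a walkthrough video of an indoor scene."},
        {"role": "user", "content": f"{COGMAP_INSTRUCTION}\nQuestion for context: {question}"},
    ]
    cognitive_map = call_mllm(map_messages)

    # Stage 2: answer the question using the generated map as explicit spatial context.
    qa_messages = [
        {"role": "system", "content": "Answer spatial questions using the cognitive map below."},
        {"role": "user", "content": f"Cognitive map:\n{cognitive_map}\n\nQuestion: {question}"},
    ]
    return call_mllm(qa_messages)
```

The design point this illustrates is the abstract's contrast: rather than adding more linguistic reasoning steps (chain-of-thought, self-consistency, tree-of-thoughts), the model is asked to externalize a spatial representation first and then reason over it.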

Similar Work