
GGBench: A Geometric Generative Reasoning Benchmark for Unified Multimodal Models

Jingxuan Wei, Caijun Jia, Xi Bai, Xinglong Xu, Siyuan Li, Linzhuang Sun, Bihui Yu, Conghui He, Lijun Wu, Cheng Tan. No Venue 2025

Tags: Compositional Generalization · Evaluation · Has Code · Image Text Integration · Interdisciplinary Approaches · Tools · Visual Contextualization

The advent of Unified Multimodal Models (UMMs) signals a paradigm shift in artificial intelligence, moving from passive perception to active, cross-modal generation. Despite their unprecedented ability to synthesize information, a critical gap persists in evaluation: existing benchmarks primarily assess discriminative understanding or unconstrained image generation separately, failing to measure the integrated cognitive process of generative reasoning. To bridge this gap, we propose that geometric construction provides an ideal testbed, as it inherently demands a fusion of language comprehension and precise visual generation. We introduce GGBench, a benchmark designed specifically to evaluate geometric generative reasoning. It provides a comprehensive framework for systematically diagnosing a model's ability not only to understand and reason but also to actively construct a solution, thereby setting a more rigorous standard for the next generation of intelligent systems. Project website: https://opendatalab-raiser.github.io/GGBench/.

Similar Work