
InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection

Yuhang Liu, Pengxiang Li, Zishu Wei, Congkai Xie, Xueyu Hu, Xinchen Xu, Shengyu Zhang, Xiaotian Han, Hongxia Yang, Fei Wu. No Venue, 2025

[Code] [Paper]
Tags: Agentic, Compositional Generalization, Fine Tuning, Has Code, Image Text Integration, Interdisciplinary Approaches, Multimodal Semantic Representation, Productivity Enhancement, Visual Contextualization

Graphical User Interface (GUI) agents, powered by multimodal large language models (MLLMs), have shown great potential for task automation on computing devices such as computers and mobile phones. However, existing agents struggle with multi-step reasoning and depend on textual annotations, which limits their effectiveness. We introduce InfiGUIAgent, an MLLM-based GUI agent trained with a two-stage supervised fine-tuning pipeline. Stage 1 enhances fundamental skills such as GUI understanding and grounding, while Stage 2 integrates hierarchical reasoning and expectation-reflection reasoning skills using synthesized data, giving the agent native reasoning abilities. InfiGUIAgent achieves competitive performance on several GUI benchmarks, highlighting the impact of native reasoning skills in enhancing GUI interaction for automation tasks. Resources are available at https://github.com/Reallm-Labs/InfiGUIAgent.
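The "expectation-reflection" reasoning mentioned in the abstract can be pictured as a step loop in which the agent states an expected outcome before acting and, at the next step, reflects on whether that expectation was met. The Python sketch below is a minimal illustration of that loop under stated assumptions: `call_mllm`, the `env` object (with `screenshot()` and `execute()`), the prompt wording, and `parse_action_and_expectation` are hypothetical placeholders, not the paper's released code; see the GitHub repository above for the actual implementation.

```python
# A rough sketch of a step loop with expectation-reflection reasoning, as the
# abstract describes it. Every interface here is an assumption for illustration:
# `call_mllm`, the environment object, and the prompt wording are hypothetical,
# not InfiGUIAgent's actual APIs.

from dataclasses import dataclass


@dataclass
class StepRecord:
    """One interaction step: the action taken and the outcome expected from it."""
    action: str
    expectation: str


def call_mllm(prompt: str, screenshot: bytes) -> str:
    """Placeholder for the fine-tuned multimodal LLM; a real agent would send
    the current screenshot and prompt to the model here."""
    raise NotImplementedError


def parse_action_and_expectation(response: str) -> tuple[str, str]:
    """Illustrative parser: assumes the model writes the action first, then a
    line beginning with 'Expectation:'."""
    action, _, expectation = response.partition("Expectation:")
    return action.strip(), expectation.strip()


def run_episode(env, task: str, max_steps: int = 10) -> list[StepRecord]:
    """Run one task episode: reflect on the previous expectation, then reason
    from subgoal to a concrete GUI action, and act."""
    history: list[StepRecord] = []
    for _ in range(max_steps):
        screenshot = env.screenshot()  # hypothetical environment API

        # Reflection: check whether the previous step's expectation was met.
        reflection = "First step; nothing to reflect on."
        if history:
            reflection = call_mllm(
                f"Task: {task}\n"
                f"Previous action: {history[-1].action}\n"
                f"Expected outcome: {history[-1].expectation}\n"
                "Did the screen change as expected? Note any mismatch.",
                screenshot,
            )

        # Expectation: plan a subgoal, ground it to a concrete GUI action
        # (e.g. a click with coordinates), and state the expected result.
        response = call_mllm(
            f"Task: {task}\nReflection: {reflection}\n"
            "Give the next subgoal, the concrete GUI action, and then a line "
            "'Expectation: ...' describing the expected result.",
            screenshot,
        )
        action, expectation = parse_action_and_expectation(response)
        history.append(StepRecord(action=action, expectation=expectation))

        if env.execute(action):  # hypothetical: returns True when the task is done
            break
    return history
```

Storing the stated expectation alongside each action is what lets the next step's reflection compare the predicted outcome against the observed screen without re-deriving the plan.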

Similar Work