
MCPEval: Automatic MCP-based Deep Evaluation for AI Agent Models

Zhiwei Liu, Jielin Qiu, Shiyu Wang, Jianguo Zhang, Zuxin Liu, Roshan Ram, Haolin Chen, Weiran Yao, Huan Wang, Shelby Heinecke, Silvio Savarese, Caiming Xiong. No Venue, 2025

[Code] [Paper]
Agentic Compositional Generalization Datasets Evaluation Frameworks Evaluation Has Code Interdisciplinary Approaches Multimodal Semantic Representation Tools

The rapid rise of intelligent agents built on Large Language Models (LLMs) underscores the need for robust, scalable evaluation frameworks. Existing methods rely on static benchmarks and labor-intensive data collection, limiting practical assessment. We introduce MCPEval, an open-source Model Context Protocol (MCP)-based framework that automates end-to-end task generation and deep evaluation of LLM agents across diverse domains. MCPEval standardizes metrics, seamlessly integrates with native agent tools, and eliminates manual effort in building evaluation pipelines. Empirical results across five real-world domains show its effectiveness in revealing nuanced, domain-specific performance. We publicly release MCPEval at https://github.com/SalesforceAIResearch/MCPEval to promote reproducible and standardized LLM agent evaluation.
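For context, the abstract's claim that the framework "integrates with native agent tools" rests on the Model Context Protocol, under which an evaluation harness talks to the same tool servers the agent uses. The sketch below illustrates that underlying interaction pattern with the official `mcp` Python SDK: connect to a local MCP server, list its tools, and invoke one. The server script (`my_server.py`), tool name (`get_weather`), and arguments are hypothetical placeholders, and this is not MCPEval's own API; see the linked repository for the framework's actual usage.

```python
# Minimal sketch of the MCP client interaction an evaluation harness builds on.
# Assumes the official `mcp` Python SDK (pip install mcp) and a hypothetical
# local MCP server script exposing a tool named "get_weather".
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["my_server.py"],  # hypothetical MCP server script
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes (what an agent would call).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke one tool with example arguments and inspect the result.
            result = await session.call_tool("get_weather", arguments={"city": "Paris"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```

An automated evaluator can wrap this loop to generate tasks against the discovered tool schemas and score the agent's tool-call trajectories, which is the kind of pipeline MCPEval automates end to end.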

Similar Work