
DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering

Yinsheng Li, Zhen Dong, Yi Shao · 2025

Tags: Agentic Applications · Datasets · Evaluation · Has Code · Instruction Following · Tools

Large Language Model (LLM) agents have shown great potential for solving real-world problems and promise to be a solution for task automation in industry. However, more benchmarks are needed to systematically evaluate automation agents from an industrial perspective, for example, in Civil Engineering. Therefore, we propose DrafterBench for the comprehensive evaluation of LLM agents in the context of technical drawing revision, a representative task in civil engineering. DrafterBench contains twelve types of tasks summarized from real-world drawing files, with 46 customized functions/tools and 1,920 tasks in total. DrafterBench is an open-source benchmark to rigorously test AI agents' proficiency in interpreting intricate and long-context instructions, leveraging prior knowledge, and adapting to dynamic instruction quality via implicit policy awareness. The toolkit comprehensively assesses distinct capabilities in structured data comprehension, function execution, instruction following, and critical reasoning. DrafterBench offers detailed analysis of task accuracy and error statistics, aiming to provide deeper insight into agent capabilities and identify improvement targets for integrating LLMs in engineering applications. Our benchmark is available at https://github.com/Eason-Li-AIS/DrafterBench, with the test set hosted at https://huggingface.co/datasets/Eason666/DrafterBench.
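
Since the test set is hosted on the Hugging Face Hub, it can be pulled with the standard `datasets` library. The snippet below is a minimal sketch assuming the repository `Eason666/DrafterBench` is loadable in the default `datasets` format; split and column names are inspected at runtime rather than assumed.

```python
# Minimal sketch: load the DrafterBench test set from the Hugging Face Hub
# and inspect its structure before writing any evaluation code.
# Assumes the dataset repo is published in a standard `datasets`-loadable format.
from datasets import load_dataset

dataset = load_dataset("Eason666/DrafterBench")

# Print each split's name, size, and schema (column names are not assumed here).
for split_name, split in dataset.items():
    print(split_name, len(split), split.column_names)
```
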

Discussion: https://huggingface.co/discussions/paper/68785ee1001546c83aa4f96a

Similar Work