
RoCo: Dialectic Multi-Robot Collaboration with Large Language Models

Zhao Mandi, Shreeya Jain, Shuran Song. 2024 IEEE International Conference on Robotics and Automation (ICRA) – 32 citations

[Paper] [Code]    
Has Code · Interpretability and Explainability · Reinforcement Learning · Prompting · Agentic · Evaluation

We propose a novel approach to multi-robot collaboration that harnesses the power of pre-trained large language models (LLMs) for both high-level communication and low-level path planning. Robots are equipped with LLMs to discuss and collectively reason about task strategies. They then generate sub-task plans and task-space waypoint paths, which are used by a multi-arm motion planner to accelerate trajectory planning. We also provide feedback from the environment, such as collision checking, and prompt the LLM agents to improve their plans and waypoints in-context. For evaluation, we introduce RoCoBench, a 6-task benchmark covering a wide range of multi-robot collaboration scenarios, accompanied by a text-only dataset for agent representation and reasoning. We experimentally demonstrate the effectiveness of our approach: it achieves high success rates across all tasks in RoCoBench and adapts to variations in task semantics. Our dialog setup offers high interpretability and flexibility; in real-world experiments, we show RoCo easily incorporates human-in-the-loop collaboration, where a user can communicate and collaborate with a robot agent to complete tasks together. See the project website https://project-roco.github.io for videos and code.
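
The abstract describes a loop in which LLM-driven agents first discuss a plan, then revise it in-context using environment feedback such as collision checks before handing waypoints to a motion planner. The sketch below illustrates that control flow only; it is not the authors' released code, and the function names (`query_llm`, `collision_feedback`, `roco_round`) and the prompt/plan formats are hypothetical placeholders.

```python
# Minimal sketch of a RoCo-style dialog-and-replan loop (hypothetical names,
# placeholder LLM and environment calls; see the project repo for the real code).
from typing import Callable, List, Optional


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned waypoint proposal."""
    return "WAYPOINTS: [(0.2, 0.1, 0.3), (0.4, 0.1, 0.3)]"


def collision_feedback(plan: str) -> Optional[str]:
    """Placeholder environment check; returns feedback text, or None if the plan is valid."""
    return None  # e.g., "arm_1 waypoint 2 collides with the table"


def roco_round(agent_prompts: List[str],
               llm: Callable[[str], str] = query_llm,
               max_replans: int = 3) -> Optional[str]:
    """One discussion round followed by feedback-driven in-context re-planning."""
    # 1) Dialog: each agent responds in turn; replies are shared with all agents.
    dialog = ""
    for prompt in agent_prompts:
        reply = llm(prompt + "\n" + dialog)
        dialog += "\n" + reply

    # 2) Treat the final agreed message as the joint sub-task plan / waypoint path.
    plan = dialog.strip().splitlines()[-1]

    # 3) Validate with the environment and re-prompt with the feedback in-context.
    for _ in range(max_replans):
        feedback = collision_feedback(plan)
        if feedback is None:
            return plan  # plan passes checks; hand off to the multi-arm motion planner
        plan = llm(f"{dialog}\nEnvironment feedback: {feedback}\nRevise the plan.")
    return None  # no valid plan within the re-plan budget


if __name__ == "__main__":
    prompts = ["You are robot arm_0 ...", "You are robot arm_1 ..."]
    print(roco_round(prompts))
```

In this sketch a human participant could simply be another entry in `agent_prompts` whose replies are typed in rather than generated, which is the sense in which the dialog setup admits human-in-the-loop collaboration.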

Similar Work