
LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents

Shilong Liu, Hao Cheng, Haotian Liu, Hao Zhang, Feng Li, Tianhe Ren, Xueyan Zou, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang, Jianfeng Gao, Chunyuan Li. No Venue, 2023

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Has Code RAG Tools

LLaVA-Plus is a general-purpose multimodal assistant that expands the capabilities of large multimodal models. It maintains a skill repository of pre-trained vision and vision-language models and can activate relevant tools based on user inputs to fulfill real-world tasks. LLaVA-Plus is trained on multimodal instruction-following data to acquire the ability to use tools, covering visual understanding, generation, external knowledge retrieval, and compositions thereof. Empirical results show that LLaVA-Plus outperforms LLaVA on existing capabilities and exhibits new ones. It is distinct in that the image query is directly grounded and actively engaged throughout the entire human-AI interaction session, significantly improving tool-use performance and enabling new scenarios.
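
The sketch below is a hypothetical illustration of this skill-repository idea, not the paper's released code: a planning step picks a pre-trained tool for the user's request, executes it on the image, and composes a reply from the tool output. All names (SKILL_REPOSITORY, plan_tool_call, answer) and the keyword-based planner are assumptions made for the example; in LLaVA-Plus the tool choice is produced by the instruction-tuned multimodal model itself.

```python
# Illustrative sketch only (not the LLaVA-Plus implementation): a minimal
# skill-repository dispatch loop. Tool names and helper functions are hypothetical.
from typing import Callable, Dict, Tuple

# Hypothetical skill repository: tool name -> callable taking (image, arguments).
# The lambdas are stand-ins for pre-trained vision / vision-language models.
SKILL_REPOSITORY: Dict[str, Callable[[str, str], str]] = {
    "detection": lambda image, args: f"boxes for '{args}' in {image}",
    "segmentation": lambda image, args: f"masks for '{args}' in {image}",
    "retrieval": lambda image, args: f"external knowledge about '{args}'",
    "generation": lambda image, args: f"edited version of {image}: '{args}'",
}

def plan_tool_call(user_request: str) -> Tuple[str, str]:
    """Stand-in for the model's planning step: decide which skill to invoke.
    Here a toy keyword heuristic; in the paper this is learned from
    multimodal instruction-following data."""
    for tool in SKILL_REPOSITORY:
        if tool in user_request.lower():
            return tool, user_request
    return "detection", user_request  # default skill for this sketch

def answer(image: str, user_request: str) -> str:
    tool, args = plan_tool_call(user_request)
    tool_output = SKILL_REPOSITORY[tool](image, args)  # execute the chosen skill
    # The multimodal model would then compose a reply grounded in the tool output.
    return f"[{tool}] {tool_output}"

print(answer("street.jpg", "Run segmentation on the pedestrians"))
```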

https://huggingface.co/discussions/paper/654db59df5297ada0b9cd405

Similar Work