
PandaGPT: One Model To Instruction-Follow Them All

Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, Deng Cai. arXiv 2023 – 45 citations

[Code] [Paper]
Compositional Generalization · Has Code · Image Text Integration · Interdisciplinary Approaches · Model Architecture · Multimodal · Semantic Representation · Training Techniques · Visual Contextualization

We present PandaGPT, an approach to emPower large lANguage moDels with visual and Auditory instruction-following capabilities. Our pilot experiments show that PandaGPT can perform complex tasks such as detailed image description generation, writing stories inspired by videos, and answering questions about audio. More interestingly, PandaGPT can take multimodal inputs simultaneously and compose their semantics naturally. For example, PandaGPT can connect how objects look in an image/video with how they sound in audio. To do so, PandaGPT combines the multimodal encoders from ImageBind and the large language models from Vicuna. Notably, only aligned image-text pairs are required for the training of PandaGPT. Thanks to the strong capability of ImageBind in embedding data from different modalities into the same space, PandaGPT displays emergent, i.e., zero-shot, cross-modal behaviors for data other than image and text (e.g., video, audio, depth, thermal, and IMU). We hope that PandaGPT serves as an initial step toward building AGI that can perceive and understand inputs in different modalities holistically, as we humans do. Our project page is at https://panda-gpt.github.io/.
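To illustrate the bridging idea described above, here is a minimal sketch in PyTorch. It is not the authors' implementation: the module names, embedding sizes, and placeholder encoder/LLM are assumptions, and only a single linear projection is trained here, whereas the real system connects a pretrained ImageBind encoder to a pretrained Vicuna model. The point it shows is that a frozen encoder maps any modality into one shared embedding space, a small trainable projection maps that embedding into the LLM's input space, and the projected vector is prepended to the instruction tokens; because the encoder is shared across modalities, training on image-text pairs can transfer zero-shot to audio, video, and other inputs.

```python
import torch
import torch.nn as nn


class MultimodalPrefixLM(nn.Module):
    """Hypothetical sketch: frozen multimodal encoder + frozen LLM, trainable bridge."""

    def __init__(self, encoder_dim=1024, llm_dim=512, vocab_size=32000):
        super().__init__()
        # Placeholders for the frozen ImageBind encoder and the frozen LLM;
        # real systems use pretrained models with much larger hidden sizes.
        self.frozen_encoder = nn.Linear(encoder_dim, encoder_dim)
        self.token_embed = nn.Embedding(vocab_size, llm_dim)
        self.frozen_llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        for p in list(self.frozen_encoder.parameters()) + list(self.frozen_llm.parameters()):
            p.requires_grad = False
        # The only trainable piece in this sketch: a linear bridge from the
        # shared multimodal embedding space into the LLM input space.
        self.proj = nn.Linear(encoder_dim, llm_dim)

    def forward(self, modality_feat, text_token_ids):
        # modality_feat: (batch, encoder_dim) embedding from the shared space
        # text_token_ids: (batch, seq_len) instruction tokens
        prefix = self.proj(self.frozen_encoder(modality_feat)).unsqueeze(1)  # (B, 1, llm_dim)
        text_emb = self.token_embed(text_token_ids)                          # (B, T, llm_dim)
        inputs = torch.cat([prefix, text_emb], dim=1)                        # prepend prefix
        return self.frozen_llm(inputs)


# Usage: the same forward path serves image, audio, or video features, since
# all of them arrive as vectors in the encoder's shared embedding space.
model = MultimodalPrefixLM()
feat = torch.randn(2, 1024)                  # e.g. image or audio embedding
tokens = torch.randint(0, 32000, (2, 16))    # instruction tokens
out = model(feat, tokens)
print(out.shape)  # torch.Size([2, 17, 512])
```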

Similar Work