BLIP3-KALE: Knowledge Augmented Large-scale Dense Captions

Anas Awadalla, Le Xue, Manli Shu, An Yan, Jun Wang, Senthil Purushwalkam, Sheng Shen, Hannah Lee, Oscar Lo, Jae Sung Park, Etash Guha, Silvio Savarese, Ludwig Schmidt, Yejin Choi, Caiming Xiong, Ran Xu. No Venue, 2024

[Code] [Paper]
Compositional Generalization Datasets Image Text Integration Interdisciplinary Approaches Multimodal Semantic Representation Training Techniques Visual Contextualization

We introduce BLIP3-KALE, a dataset of 218 million image-text pairs that bridges the gap between descriptive synthetic captions and factual web-scale alt-text. KALE augments synthetic dense image captions with web-scale alt-text to generate factually grounded image captions. Our two-stage approach leverages large vision-language models and language models to create knowledge-augmented captions, which are then used to train a specialized VLM for scaling up the dataset. We train vision-language models on KALE and demonstrate improvements on vision-language tasks. Our experiments show the utility of KALE for training more capable and knowledgeable multimodal models. We release the KALE dataset at https://huggingface.co/datasets/Salesforce/blip3-kale
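
Since the abstract links to the released dataset on the Hugging Face Hub, below is a minimal sketch of how one might stream and inspect a few KALE records with the `datasets` library. The field names referenced in the comments are assumptions and may differ from the actual dataset schema.

```python
# Minimal sketch: stream a few BLIP3-KALE records from the Hugging Face Hub.
# Assumes the `datasets` library is installed; the exact column names
# (e.g. image URL, caption text) are assumptions -- print a record to check.
from datasets import load_dataset

# Streaming avoids downloading all 218M image-text pairs up front.
kale = load_dataset("Salesforce/blip3-kale", split="train", streaming=True)

for i, example in enumerate(kale):
    print(example)  # inspect the raw record to see the actual field names
    if i >= 2:      # look at only a handful of examples
        break
```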
