
CLIP Models Are Few-shot Learners: Empirical Studies On VQA And Visual Entailment

Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022 – 71 citations


CLIP has shown a remarkable zero-shot capability on a wide range of vision tasks. Previously, CLIP was regarded only as a powerful visual encoder. However, after being pre-trained with language supervision on a large number of image-caption pairs, CLIP itself should also have acquired some few-shot abilities for vision-language tasks. In this work, we empirically show that CLIP can be a strong vision-language few-shot learner by leveraging the power of language. We first evaluate CLIP's zero-shot performance on a typical visual question answering task and demonstrate CLIP's zero-shot cross-modality transfer capability on the visual entailment task. We then propose a parameter-efficient fine-tuning strategy to boost the few-shot performance on the VQA task. We achieve competitive zero/few-shot results on the visual question answering and visual entailment tasks without introducing any additional pre-training procedure.
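A minimal sketch of the kind of zero-shot VQA setup the abstract describes, assuming each question is converted into a declarative prompt per candidate answer and scored by CLIP's image-text similarity. The prompt template, candidate answers, image path, and model checkpoint below are illustrative assumptions, not the exact setup used in the paper.

```python
# Sketch: zero-shot VQA by scoring answer prompts with CLIP (assumed setup,
# not the paper's exact prompts or candidate-answer construction).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
question = "What color is the car?"
candidate_answers = ["red", "blue", "green", "black"]

# Turn each (question, answer) pair into a caption-style statement.
prompts = [f"the car is {answer}" for answer in candidate_answers]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity score for each prompt;
# the highest-scoring prompt determines the predicted answer.
scores = outputs.logits_per_image.softmax(dim=-1)
best = candidate_answers[scores.argmax().item()]
print(f"Predicted answer: {best}")
```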

Similar Work