Vision-language Models For Vision Tasks: A Survey

Jingyi Zhang, Jiaxing Huang, Sheng Jin, Shijian Lu. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 – 381 citations

Tags: Compositional Generalization, Datasets, Efficiency, Fine Tuning, Has Code, Image Text Integration, Interdisciplinary Approaches, LLM For Code, Multimodal Semantic Representation, Neural Machine Translation, Survey Paper, Training Techniques, Variational Autoencoders, Visual Contextualization, Visual Question Answering

Most visual recognition studies rely heavily on crowd-labelled data for training deep neural networks (DNNs), and they usually train a separate DNN for each visual recognition task, leading to a laborious and time-consuming visual recognition paradigm. To address these two challenges, Vision-Language Models (VLMs) have been intensively investigated recently: VLMs learn rich vision-language correlations from web-scale image-text pairs that are almost infinitely available on the Internet, and a single VLM enables zero-shot predictions on a variety of visual recognition tasks. This paper provides a systematic review of VLMs for visual recognition tasks, covering: (1) the background that introduces the development of visual recognition paradigms; (2) the foundations of VLMs, summarizing the widely adopted network architectures, pre-training objectives, and downstream tasks; (3) the widely adopted datasets in VLM pre-training and evaluation; (4) the review and categorization of existing VLM pre-training methods, VLM transfer learning methods, and VLM knowledge distillation methods; (5) the benchmarking, analysis, and discussion of the reviewed methods; (6) several research challenges and potential research directions that could be pursued in future VLM studies for visual recognition. A project associated with this survey has been created at https://github.com/jingyi0000/VLM_survey.
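To make the zero-shot prediction idea concrete, the sketch below shows CLIP-style zero-shot classification, one of the VLM usage patterns the survey reviews: class names are turned into text prompts, and the image is assigned to the prompt with the highest image-text similarity. This is a minimal illustration using the Hugging Face transformers CLIP API, not a method proposed by the survey itself; the checkpoint name is a public CLIP model and `example.jpg` is a hypothetical input.

```python
# Minimal sketch of CLIP-style zero-shot classification (illustrative only).
# Assumes: pip install torch transformers pillow; "example.jpg" is hypothetical.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Turn arbitrary class names into text prompts -- no task-specific training.
class_names = ["cat", "dog", "airplane"]
prompts = [f"a photo of a {name}" for name in class_names]

image = Image.open("example.jpg")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

# logits_per_image holds the scaled image-text similarities; a softmax over
# the prompts yields per-class probabilities for this image.
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(class_names[probs.argmax().item()])
```

Because the class set is just a list of strings, the same pre-trained model can be pointed at a new recognition task by editing `class_names`, which is the single-VLM, zero-shot paradigm the abstract describes.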
