
Revisiting Classifier: Transferring Vision-language Models For Video Recognition

Wenhao Wu, Zhun Sun, Wanli Ouyang. Proceedings of the AAAI Conference on Artificial Intelligence 2022 – 69 citations

Tags: AAAI, Efficiency, Fine-Tuning, Model Architecture, Vision-Language

Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. With the growth of computational capacity, we now have open-source vision-language pre-trained models that are large in both model architecture and training data. In this study, we focus on transferring knowledge for video classification tasks. Conventional methods randomly initialize the linear classifier head for vision classification, leaving the use of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revisit the role of the linear classifier and replace it with knowledge from the pre-trained model: we use the well-pretrained language model to generate good semantic targets for efficient transfer learning. Our empirical study shows that this method improves both the performance and the training speed of video classification, with a negligible change to the model. This simple yet effective tuning paradigm achieves state-of-the-art performance and efficient training in various video recognition scenarios, i.e., zero-shot, few-shot, and general recognition. In particular, it reaches state-of-the-art accuracy of 87.8% on Kinetics-400, and surpasses previous methods by 20–50% absolute top-1 accuracy under zero-shot and few-shot settings on five popular video datasets. Code and models can be found at https://github.com/whwu95/Text4Vis .
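The core idea — swapping a randomly initialized, trainable linear head for frozen classifier weights derived from class-name text embeddings — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `encode_class_names` is a hypothetical stand-in for a real pre-trained text encoder (e.g. CLIP's), which here just produces fixed unit-norm vectors.

```python
import numpy as np

# Hypothetical stand-in for a pre-trained text encoder: in the actual
# method, each class name (optionally wrapped in a prompt) is encoded
# by the vision-language model's text encoder.
def encode_class_names(class_names, dim=8, seed=0):
    rng = np.random.default_rng(seed)
    embs = rng.normal(size=(len(class_names), dim))
    # Unit-normalize so classification reduces to cosine similarity.
    return embs / np.linalg.norm(embs, axis=1, keepdims=True)

class_names = ["archery", "bowling", "juggling"]

# Text-derived, frozen classifier weights: one row per class.
# This replaces a randomly initialized linear classifier head.
W = encode_class_names(class_names)

# A (unit-norm) video feature produced by the visual encoder.
video_feat = np.ones(8) / np.sqrt(8)

# Logits are similarities between the video feature and each
# class's semantic target; argmax gives the predicted class.
logits = W @ video_feat
pred = class_names[int(np.argmax(logits))]
```

Because the classifier weights come from the text encoder rather than random initialization, unseen class names can be encoded the same way, which is what enables the zero-shot and few-shot settings the abstract reports.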
