
ProposalCLIP: Unsupervised Open-Category Object Proposal Generation via Exploiting CLIP Cues

Hengcan Shi, Munawar Hayat, Yicheng Wu, Jianfei Cai. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 – 51 citations

Tags: CVPR, Datasets, Vision Language

Object proposal generation is an important and fundamental task in computer vision. In this paper, we propose ProposalCLIP, a method towards unsupervised open-category object proposal generation. Unlike previous works which require a large number of bounding box annotations and/or can only generate proposals for limited object categories, our ProposalCLIP is able to predict proposals for a large variety of object categories without annotations, by exploiting CLIP (contrastive language-image pre-training) cues. First, we analyze CLIP for unsupervised open-category proposal generation and design an objectness score based on our empirical analysis of proposal selection. Second, a graph-based merging module is proposed to address the limitations of CLIP cues and merge fragmented proposals. Finally, we present a proposal regression module that extracts pseudo labels based on CLIP cues and trains a lightweight network to further refine proposals. Extensive experiments on the PASCAL VOC, COCO and Visual Genome datasets show that our ProposalCLIP generates better proposals than previous state-of-the-art methods. Our ProposalCLIP also shows benefits for downstream tasks, such as unsupervised object detection.
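The abstract does not spell out the objectness formulation, but the core idea of scoring class-agnostic proposals with CLIP cues can be illustrated with a minimal sketch. The snippet below uses OpenAI's `clip` package; the category list, prompt template, and the max-similarity-minus-entropy score are illustrative assumptions, not the authors' definition.

```python
# Minimal sketch: ranking candidate boxes by a CLIP-based objectness proxy.
# Assumptions (not from the paper): the category prompts and the
# "max similarity minus normalized entropy" score below are illustrative only.
import math

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical open-vocabulary category prompts; any noun list could be used.
CATEGORIES = ["person", "dog", "car", "chair", "bottle", "bird", "sofa", "plant"]
text_tokens = clip.tokenize([f"a photo of a {c}" for c in CATEGORIES]).to(device)


@torch.no_grad()
def objectness(image: Image.Image, boxes):
    """Score each box by how confidently CLIP matches its crop to *some* category.

    A crop covering a single object tends to give a peaked (low-entropy)
    similarity distribution over the prompts; background or fragmented crops
    tend to give a flat one.
    """
    text_feat = model.encode_text(text_tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    scores = []
    for (x1, y1, x2, y2) in boxes:
        crop = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0).to(device)
        img_feat = model.encode_image(crop)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

        # Softmax similarity over category prompts for this crop.
        probs = (100.0 * img_feat @ text_feat.T).softmax(dim=-1).squeeze(0).float()
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum() / math.log(len(CATEGORIES))
        scores.append((probs.max() - entropy).item())
    return scores


# Usage: rank class-agnostic candidate boxes (e.g. from an unsupervised
# region generator) and keep the top-scoring ones as open-category proposals.
# image = Image.open("example.jpg")
# ranked = sorted(zip(objectness(image, candidate_boxes), candidate_boxes), reverse=True)
```

The paper's full pipeline additionally merges fragmented proposals with a graph-based module and refines boxes with a lightweight regression network trained on CLIP-derived pseudo labels; the sketch above only covers the proposal-scoring step.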

Similar Work