
PRIOR: Prototype Representation Joint Learning From Medical Images And Reports

Pujin Cheng, Li Lin, Junyan Lyu, Yijin Huang, Wenhan Luo, Xiaoying Tang. 2023 IEEE/CVF International Conference on Computer Vision (ICCV) – 49 citations

Tags: 3D Representation, Compositional Generalization, Datasets, Few Shot, Has Code, ICCV, Tools, Training Techniques

Contrastive-learning-based vision-language joint pre-training has emerged as a successful representation learning strategy. In this paper, we present a prototype representation learning framework incorporating both global and local alignment between medical images and reports. In contrast to standard global multi-modality alignment methods, we employ a local alignment module for fine-grained representation. Furthermore, a cross-modality conditional reconstruction module is designed to interchange information across modalities in the training phase by reconstructing masked images and reports. For reconstructing long reports, a sentence-wise prototype memory bank is constructed, enabling the network to focus on low-level localized visual and high-level clinical linguistic features. Additionally, a non-auto-regressive generation paradigm is proposed for reconstructing non-sequential reports. Experimental results on five downstream tasks, including supervised classification, zero-shot classification, image-to-text retrieval, semantic segmentation, and object detection, show that the proposed method outperforms other state-of-the-art methods across multiple datasets and under different dataset size settings. The code is available at https://github.com/QtacierP/PRIOR.
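The global image-report alignment underlying frameworks like this one is typically trained with a symmetric InfoNCE contrastive objective: matched image/report pairs in a batch are pulled together while all other pairings serve as negatives. The sketch below is a minimal, generic illustration of that objective in numpy, not the paper's implementation; the function name, the temperature value, and the use of plain arrays in place of encoder outputs are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/report
    embeddings (shape (B, D) each). Matched pairs lie on the diagonal
    of the similarity matrix; every other entry in the same row or
    column acts as a negative. The temperature is an illustrative
    default, not a value taken from the paper."""
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (B, B) similarity matrix

    def cross_entropy_diag(lg):
        # Cross-entropy with the diagonal (index i for row i) as target
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

In practice the local alignment module described in the abstract would apply a similar contrastive matching at a finer granularity (e.g. between regional visual features and report sentences) rather than between whole-image and whole-report embeddings.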

Similar Work