
Knowledge-enhanced Visual-language Pre-training On Chest Radiology Images

Xiaoman Zhang, Chaoyi Wu, Ya Zhang, Yanfeng Wang, Weidi Xie. Nature Communications 2023 – 134 citations

Tags: Datasets, Fine-Tuning, Vision-Language

While multi-modal foundation models pre-trained on large-scale data have been successful in natural language understanding and vision recognition, their use in medical domains remains limited due to the fine-grained nature of medical tasks and the high demand for domain knowledge. To address this challenge, we propose a novel approach called Knowledge-enhanced Auto Diagnosis (KAD), which leverages existing medical domain knowledge to guide vision-language pre-training using paired chest X-rays and radiology reports. We evaluate KAD on four external X-ray datasets and demonstrate that its zero-shot performance is not only comparable to that of fully-supervised models, but also superior to the average of three expert radiologists for three (out of five) pathologies with statistical significance. Moreover, when few-shot annotation is available, KAD outperforms all existing approaches in fine-tuning settings, demonstrating its potential for application in different clinical scenarios.
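The zero-shot diagnosis the abstract describes rests on CLIP-style image-text alignment: a radiograph is scored against text prompts naming candidate pathologies, and the highest-similarity prompt wins. The sketch below illustrates only that shared mechanism, not the authors' released KAD code; the toy encoders, the hash-based tokenizer, and the prompt template are stand-ins for the pre-trained components a real system would use, and KAD's knowledge-base guidance of the text side is omitted.

```python
# Minimal sketch of zero-shot pathology classification via image-text
# similarity (the general mechanism behind CLIP-style evaluation, not the
# authors' KAD implementation). All components are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512

class ToyImageEncoder(nn.Module):
    """Stand-in for a pre-trained radiograph encoder (e.g. a ResNet/ViT)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(EMBED_DIM))
    def forward(self, x):
        return self.net(x)

class ToyTextEncoder(nn.Module):
    """Stand-in for a pre-trained report/knowledge encoder; takes token ids."""
    def __init__(self, vocab=30522):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, EMBED_DIM)  # mean-pools token embeddings
    def forward(self, ids):
        return self.emb(ids)

def toy_tokenize(text, vocab=30522, max_len=16):
    # Hash-based stand-in for a real clinical tokenizer.
    ids = [hash(w) % vocab for w in text.lower().split()][:max_len]
    return torch.tensor(ids).unsqueeze(0)

pathologies = ["atelectasis", "cardiomegaly", "consolidation",
               "edema", "pleural effusion"]
prompts = [f"chest x-ray showing {p}" for p in pathologies]  # hypothetical template

image_enc, text_enc = ToyImageEncoder(), ToyTextEncoder()
image = torch.randn(1, 1, 224, 224)  # fake grayscale radiograph

with torch.no_grad():
    # Embed image and all prompts into the shared space, L2-normalized so the
    # dot product below is cosine similarity.
    img_emb = F.normalize(image_enc(image), dim=-1)
    txt_embs = F.normalize(
        torch.cat([text_enc(toy_tokenize(p)) for p in prompts]), dim=-1)
    scores = (img_emb @ txt_embs.T).squeeze(0)  # one similarity per pathology
    probs = scores.softmax(dim=-1)

for name, p in zip(pathologies, probs):
    print(f"{name:>16s}: {p:.3f}")
```

With random weights the scores above are meaningless; the point is the inference pattern, which requires no labeled examples of the target pathologies and is what makes direct comparison against fully-supervised models and radiologist readings possible.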
