Vision-language Models For Medical Report Generation And Visual Question Answering: A Review

Iryna Hartsock, Ghulam Rasool. Frontiers in Artificial Intelligence, 2024 – 52 citations

Medical vision-language models (VLMs) combine computer vision (CV) and natural language processing (NLP) to analyze visual and textual medical data. Our paper reviews recent advancements in developing VLMs specialized for healthcare, focusing on models designed for medical report generation and visual question answering (VQA). We provide background on NLP and CV, explaining how techniques from both fields are integrated into VLMs to enable learning from multimodal data. Key areas we address include the exploration of medical vision-language datasets, in-depth analyses of architectures and pre-training strategies employed in recent noteworthy medical VLMs, and a comprehensive discussion of evaluation metrics for assessing VLMs' performance in medical report generation and VQA. We also highlight current challenges and propose future directions, including enhancing clinical validity and addressing patient privacy concerns. Overall, our review summarizes recent progress in developing VLMs to harness multimodal medical data for improved healthcare applications.