
Evaluation of the ChatGPT Family of Models for Biomedical Reasoning and Classification

Shan Chen, Yingya Li, Sheng Lu, Hoang Van, Hugo J. W. L. Aerts, Guergana K. Savova, Danielle S. Bitterman. Journal of the American Medical Informatics Association, 2024 – 48 citations

Tags: Applications · Compositional Generalization · Evaluation · Fine-Tuning · Interdisciplinary Approaches · Model Architecture · Multimodal · Semantic Representation · Prompting · Tools

Recent advances in large language models (LLMs) have shown impressive ability in biomedical question answering, but these models have not been adequately investigated for more specific biomedical applications. This study investigates the performance of LLMs such as the ChatGPT family of models (GPT-3.5 variants, GPT-4) in biomedical tasks beyond question answering. Because no patient data can be passed to the OpenAI API public interface, we evaluated model performance on over 10,000 samples as proxies for two fundamental tasks in the clinical domain: classification and reasoning. The first task is classifying whether statements of clinical and policy recommendations in the scientific literature constitute health advice. The second task is detecting causal relations in the biomedical literature. We compared the LLMs with simpler models, such as bag-of-words (BoW) with logistic regression, and with fine-tuned BioBERT models. Despite the excitement around the viral popularity of ChatGPT, we found that fine-tuning remained the best strategy for both fundamental NLP tasks. The simple BoW model performed on par with the most complex LLM prompting, and prompt engineering required significant investment.
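The paper's exact features and hyperparameters are not given here, but as a rough illustration of the kind of BoW-plus-logistic-regression baseline it compares against, a minimal scikit-learn sketch for the health-advice classification task might look like the following (the example sentences and labels are made up for demonstration):

```python
# Illustrative sketch of a bag-of-words + logistic regression baseline,
# NOT the authors' actual code or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: label 1 = statement constitutes health advice.
train_texts = [
    "Patients should receive the influenza vaccine annually.",
    "The cohort included 120 participants across three sites.",
]
train_labels = [1, 0]

# Bag-of-words counts fed into a logistic regression classifier.
baseline = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
baseline.fit(train_texts, train_labels)

test_texts = ["Clinicians are advised to monitor blood pressure weekly."]
print(baseline.predict(test_texts))  # e.g. [1] -> predicted as health advice
```

A pipeline this simple is the point of the comparison: the paper reports that such a baseline performed on par with the most complex LLM prompting on these tasks.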

Similar Work