The Effect of Negation on CLIP in Medical Imaging: Limitations of Contrastive Language-Image Pretraining
By: Jasmine Vu, Shivanand Sheshappanavar
Large vision-language models like CLIP are increasingly used in medical imaging tasks because they can align images and text without extensive labeled data, making them particularly useful for applications like image retrieval, report generation, and classification in clinical settings. A known issue with this approach is that CLIP-based models often underperform when interpreting negated phrases, which is especially problematic in the context of medical diagnosis. In this study, we evaluate the Stanford AIMI CheXagent model on its ability to correctly retrieve chest X-ray images from prompts with and without negation. The goal of this project is to understand where the model fails and then to improve its retrieval accuracy by applying fine-tuning methods outlined in previous work. Our results show improved handling of negation in the CLIP model, with a slight decrease in accuracy on positive prompts. Alongside retrieval accuracy, we examined internal model behavior through token attribution, t-SNE projection, and attention-head ablation to characterize how each fine-tuning approach reshaped the text encoder's representation of negated clinical language. Through this work, we hope to better understand the internal behavior of CLIP and improve its handling of negation in clinically relevant language, thereby improving its reliability in medical AI devices.
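The retrieval evaluation described above can be illustrated with a minimal sketch: rank candidate images by cosine similarity to a text prompt and check whether the top-ranked image matches the prompt's meaning. The embeddings below are hand-crafted toy placeholders, not real CheXagent outputs, and the prompt/image pairings are hypothetical.

```python
import numpy as np

def top1_retrieval(text_emb, image_embs):
    """Return the index of the image whose embedding is most
    cosine-similar to the given text embedding."""
    t = text_emb / np.linalg.norm(text_emb)
    ims = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    return int(np.argmax(ims @ t))

# Toy embeddings standing in for real image features (row i = image i).
image_embs = np.array([
    [1.0, 0.0, 0.0],   # hypothetical "pleural effusion present" image
    [0.0, 1.0, 0.0],   # hypothetical "no pleural effusion" image
    [0.0, 0.0, 1.0],   # hypothetical unrelated image
])

# Toy prompt embeddings; a well-behaved encoder should separate the
# positive and negated prompts so each retrieves its matching image.
pos_prompt = np.array([0.9, 0.1, 0.0])   # "pleural effusion"
neg_prompt = np.array([0.2, 0.8, 0.1])   # "no pleural effusion"

print(top1_retrieval(pos_prompt, image_embs))  # 0
print(top1_retrieval(neg_prompt, image_embs))  # 1
```

The failure mode studied in the paper corresponds to the negated prompt's embedding landing too close to the positive image, so the top-1 index comes back wrong; retrieval accuracy is the fraction of prompts whose top-1 image is correct.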