Trustworthy Medical Imaging with Large Language Models: A Study of Hallucinations Across Modalities
By: Anindya Bijoy Das, Shahnewaz Karim Sakib, Shibbir Ahmed
Potential Business Impact:
Identifies where AI makes mistakes when reading or creating medical images, so those errors can be caught before they affect patient care.
Large Language Models (LLMs) are increasingly applied to medical imaging tasks, including image interpretation and synthetic image generation. However, these models often produce hallucinations: confident but incorrect outputs that can mislead clinical decisions. This study examines hallucinations in two directions: image-to-text, where LLMs generate reports from X-ray, CT, or MRI scans, and text-to-image, where models create medical images from clinical prompts. We analyze errors such as factual inconsistencies and anatomical inaccuracies, evaluating outputs using expert-informed criteria across imaging modalities. Our findings reveal common patterns of hallucination in both interpretive and generative tasks, with implications for clinical reliability. We also discuss factors contributing to these failures, including model architecture and training data. By systematically studying both image understanding and generation, this work provides insights into improving the safety and trustworthiness of LLM-driven medical imaging systems.
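To make the image-to-text evaluation concrete, the sketch below shows one simple way a generated report could be scored against expert reference annotations: extract the findings the model asserts and measure the fraction not supported by the reference. The finding vocabulary, keyword matching, and metric are illustrative assumptions, not the criteria or protocol used in the paper.

```python
# Minimal, hypothetical sketch of scoring hallucinated findings in a
# model-generated radiology report against expert reference annotations.
# The finding labels, keyword matching, and metric here are assumptions
# for illustration only, not the paper's evaluation protocol.

FINDING_KEYWORDS = {
    "pleural_effusion": ["pleural effusion"],
    "cardiomegaly": ["cardiomegaly", "enlarged cardiac silhouette"],
    "pneumothorax": ["pneumothorax"],
    "consolidation": ["consolidation", "airspace opacity"],
}


def extract_findings(report: str) -> set:
    """Return the finding labels whose keywords appear in the report text.

    Note: a real pipeline would also handle negation ("no pneumothorax"),
    uncertainty, and synonyms; this naive matcher does not.
    """
    text = report.lower()
    return {
        label
        for label, keywords in FINDING_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }


def hallucination_rate(generated_report: str, reference_findings: set) -> float:
    """Fraction of findings asserted by the model that are absent from the
    expert reference annotation, i.e., likely hallucinated."""
    predicted = extract_findings(generated_report)
    if not predicted:
        return 0.0
    unsupported = predicted - reference_findings
    return len(unsupported) / len(predicted)


if __name__ == "__main__":
    report = "Mild cardiomegaly with a small left pleural effusion and a right pneumothorax."
    reference = {"cardiomegaly"}  # expert-confirmed findings for this scan
    print(f"hallucination rate: {hallucination_rate(report, reference):.2f}")
```

Under these assumptions, the example report asserts three findings, two of which lack expert support, giving a hallucination rate of 0.67; expert-informed criteria in the study play the role that the reference annotation set plays here.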
Similar Papers
Exploring Causes and Mitigation of Hallucinations in Large Vision Language Models
CV and Pattern Recognition
Investigates why vision-language models describe things that are not in images and how to reduce it.
A Survey of Multimodal Hallucination Evaluation and Detection
CV and Pattern Recognition
Surveys how hallucinations in multimodal AI are evaluated and detected.
A comprehensive taxonomy of hallucinations in Large Language Models
Computation and Language
Categorizes the different ways large language models make things up.