Trustworthy Medical Imaging with Large Language Models: A Study of Hallucinations Across Modalities
By: Anindya Bijoy Das, Shahnewaz Karim Sakib, Shibbir Ahmed
Potential Business Impact:
Helps catch AI mistakes when reading or creating medical images.
Large Language Models (LLMs) are increasingly applied to medical imaging tasks, including image interpretation and synthetic image generation. However, these models often produce hallucinations: confident but incorrect outputs that can mislead clinical decision-making. This study examines hallucinations in two directions: image-to-text, where LLMs generate reports from X-ray, CT, or MRI scans, and text-to-image, where models create medical images from clinical prompts. We analyze errors such as factual inconsistencies and anatomical inaccuracies, evaluating outputs using expert-informed criteria across imaging modalities. Our findings reveal common patterns of hallucination in both interpretive and generative tasks, with implications for clinical reliability. We also discuss factors contributing to these failures, including model architecture and training data. By systematically studying both image understanding and generation, this work provides insights into improving the safety and trustworthiness of LLM-driven medical imaging systems.
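To make the image-to-text side of such an evaluation concrete, here is a minimal, hypothetical sketch of how a generated report could be checked against expert-annotated findings. The finding vocabulary, the keyword-matching rule, and the example report are illustrative assumptions, not the paper's actual protocol, which relies on expert-informed criteria.

```python
# Hypothetical sketch: flag potential hallucinations in an LLM-generated
# radiology report by comparing findings mentioned in the text against an
# expert-annotated ground-truth set. Simple keyword matching stands in for
# whatever finding-extraction step a real evaluation pipeline would use.

from dataclasses import dataclass


@dataclass
class HallucinationCheck:
    supported: list[str]      # findings in both the report and the annotation
    hallucinated: list[str]   # findings stated in the report but not annotated
    omitted: list[str]        # annotated findings the report never mentions


def check_report(report_text: str,
                 ground_truth: set[str],
                 vocabulary: set[str]) -> HallucinationCheck:
    """Naive keyword-based factual-consistency check for a generated report."""
    text = report_text.lower()
    mentioned = {finding for finding in vocabulary if finding in text}
    return HallucinationCheck(
        supported=sorted(mentioned & ground_truth),
        hallucinated=sorted(mentioned - ground_truth),
        omitted=sorted(ground_truth - mentioned),
    )


if __name__ == "__main__":
    vocabulary = {"pleural effusion", "cardiomegaly", "pneumothorax", "consolidation"}
    expert_findings = {"cardiomegaly"}  # hypothetical annotation for one chest X-ray
    generated = (
        "The heart is enlarged, consistent with cardiomegaly. "
        "There is a small left pleural effusion."
    )
    result = check_report(generated, expert_findings, vocabulary)
    print("Supported:", result.supported)        # ['cardiomegaly']
    print("Hallucinated:", result.hallucinated)  # ['pleural effusion']
    print("Omitted:", result.omitted)            # []
```

In this toy example, the unsupported "pleural effusion" statement would be counted as a factual inconsistency of the kind the abstract describes; a real study would rely on expert review rather than keyword matching.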
Similar Papers
A comprehensive taxonomy of hallucinations in Large Language Models
Computation and Language
Makes AI tell the truth, not make things up.
HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models
Machine Learning (CS)
Fixes robots that get confused by their surroundings.
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when it sees and talks.