Trustworthy Medical Imaging with Large Language Models: A Study of Hallucinations Across Modalities

Published: August 9, 2025 | arXiv ID: 2508.07031v1

By: Anindya Bijoy Das, Shahnewaz Karim Sakib, Shibbir Ahmed

Potential Business Impact:

Helps catch AI mistakes in medical images before they mislead doctors.

Large Language Models (LLMs) are increasingly applied to medical imaging tasks, including image interpretation and synthetic image generation. However, these models often produce hallucinations: confident but incorrect outputs that can mislead clinical decisions. This study examines hallucinations in two directions: image-to-text, where LLMs generate reports from X-ray, CT, or MRI scans, and text-to-image, where models create medical images from clinical prompts. We analyze errors such as factual inconsistencies and anatomical inaccuracies, evaluating outputs using expert-informed criteria across imaging modalities. Our findings reveal common patterns of hallucination in both interpretive and generative tasks, with implications for clinical reliability. We also discuss factors contributing to these failures, including model architecture and training data. By systematically studying both image understanding and generation, this work provides insights into improving the safety and trustworthiness of LLM-driven medical imaging systems.
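
To illustrate the kind of expert-informed, criterion-based checking the abstract describes, here is a minimal Python sketch of a rubric that flags omissions and fabricated findings in a generated radiology report. This is not the authors' evaluation protocol; the criterion names, example findings, and term-matching approach are illustrative assumptions only.

```python
# Hypothetical sketch: rubric-style check of an LLM-generated radiology report
# against expert-annotated reference findings. Not the paper's actual method;
# criteria and example data are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    required_terms: set[str]   # findings that must appear, per the reference
    forbidden_terms: set[str]  # findings that would contradict the reference


def evaluate_report(generated: str, criteria: list[Criterion]) -> dict[str, bool]:
    """Return pass/fail per criterion based on simple term matching."""
    text = generated.lower()
    results = {}
    for c in criteria:
        missing = [t for t in c.required_terms if t not in text]       # omission
        hallucinated = [t for t in c.forbidden_terms if t in text]     # fabrication
        results[c.name] = not missing and not hallucinated
    return results


if __name__ == "__main__":
    # Reference (expert) findings for a chest X-ray:
    # left lower lobe consolidation, no pleural effusion.
    criteria = [
        Criterion("factual_consistency", {"consolidation"}, {"pleural effusion"}),
        Criterion("anatomical_accuracy", {"left lower lobe"}, {"right upper lobe"}),
    ]
    report = "Findings: right upper lobe consolidation with small pleural effusion."
    print(evaluate_report(report, criteria))
    # -> {'factual_consistency': False, 'anatomical_accuracy': False}
```

In practice, such checks would rely on expert review or clinical NLP rather than literal string matching, but the structure (explicit criteria scored per report, per modality) mirrors the expert-informed evaluation the study describes.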

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Electrical Engineering and Systems Science:
Image and Video Processing