Explainable Artificial Intelligence in Biomedical Image Analysis: A Comprehensive Survey
By: Getamesay Haile Dagnaw, Yanming Zhu, Muhammad Hassan Maqsood, and more
Potential Business Impact:
Helps clinicians see why AI models reach their conclusions on medical images.
Explainable artificial intelligence (XAI) has become increasingly important in biomedical image analysis to promote transparency, trust, and clinical adoption of deep learning (DL) models. While several surveys have reviewed XAI techniques, they often lack a modality-aware perspective, overlook recent advances in multimodal and vision-language paradigms, and provide limited practical guidance. This survey addresses these gaps through a comprehensive and structured synthesis of XAI methods tailored to biomedical image analysis. We systematically categorize XAI methods, analyzing their underlying principles, strengths, and limitations within biomedical contexts. A modality-centered taxonomy is proposed to align XAI methods with specific imaging types, highlighting the distinct interpretability challenges across modalities. We further examine the emerging role of multimodal learning and vision-language models in explainable biomedical AI, a topic largely underexplored in previous work. Our contributions also include a summary of widely used evaluation metrics and open-source frameworks, along with a critical discussion of persistent challenges and future directions. This survey offers a timely and in-depth foundation for advancing interpretable DL in biomedical image analysis.
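To make the survey's scope concrete, here is a minimal sketch of one widely surveyed post-hoc XAI method, Grad-CAM, implemented with Captum, a common open-source XAI framework of the kind such surveys catalog. The ResNet-18 backbone and the random input tensor are illustrative assumptions standing in for a trained biomedical model and a preprocessed scan; they are not taken from the paper.

import torch
from torchvision.models import resnet18, ResNet18_Weights
from captum.attr import LayerGradCam, LayerAttribution

# Illustrative stand-in for a trained biomedical image classifier.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Placeholder for a preprocessed image (batch of 1, 3 x 224 x 224).
scan = torch.randn(1, 3, 224, 224)

# Explain the model's top prediction for this input.
logits = model(scan)
pred_class = logits.argmax(dim=1).item()

# Grad-CAM attributes the prediction to the last convolutional block.
gradcam = LayerGradCam(model, model.layer4)
attr = gradcam.attribute(scan, target=pred_class)

# Upsample the coarse attribution map to input resolution so it can be
# overlaid on the original image as a saliency heatmap.
heatmap = LayerAttribution.interpolate(attr, scan.shape[-2:])
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])

In practice the resulting heatmap is overlaid on the input scan so a clinician can check whether the model attended to clinically meaningful regions; evaluating that alignment is exactly what the survey's discussion of XAI evaluation metrics concerns.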
Similar Papers
Towards a Transparent and Interpretable AI Model for Medical Image Classifications
CV and Pattern Recognition
Makes medical AI explain its image classifications clearly.
Explainable artificial intelligence (XAI): from inherent explainability to large language models
Machine Learning (CS)
Lets people understand why AI systems make their decisions.
A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems
Machine Learning (CS)
Helps doctors trust AI for better patient care.