Towards a Transparent and Interpretable AI Model for Medical Image Classifications
By: Binbin Wen, Yihang Wu, Tareef Daqqaq, and more
Potential Business Impact:
Makes AI doctors explain their choices clearly.
The integration of artificial intelligence (AI) into medicine offers remarkable diagnostic and therapeutic possibilities. However, the inherent opacity of complex AI models remains a significant obstacle to their clinical practicality. This paper investigates the application of explainable artificial intelligence (XAI) methods, with the aim of making AI decisions transparent and interpretable. The research implements simulations on various medical datasets to elucidate the internal workings of XAI models; these dataset-driven simulations demonstrate how XAI interprets AI predictions and thereby supports the decision-making of healthcare professionals. Alongside a survey of the main XAI methods and the simulations, ongoing challenges in the XAI field are discussed. The study highlights the need for continued development and exploration of XAI, particularly across diverse medical datasets, to promote its adoption and effectiveness in the healthcare domain.
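For context on what "interpreting AI predictions" looks like in practice, one widely used XAI technique for image classifiers is Grad-CAM, which highlights the regions of an image that most influenced a prediction. The sketch below is a minimal, hypothetical illustration in PyTorch, not code from the paper; the ResNet-18 model, random input tensor, and hook names are placeholder assumptions standing in for a trained medical-image classifier and a preprocessed scan.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Minimal Grad-CAM-style sketch (hypothetical; not from the paper).
# An untrained resnet18 and a random tensor stand in for a trained
# medical-image classifier and a preprocessed scan.
model = resnet18(weights=None)
model.eval()

store = {}

def hook(module, inputs, output):
    store["act"] = output                                  # feature maps
    output.register_hook(lambda g: store.update(grad=g))   # their gradients

model.layer4.register_forward_hook(hook)  # last conv block of ResNet-18

image = torch.randn(1, 3, 224, 224)       # placeholder input image
logits = model(image)
cls = logits.argmax(dim=1).item()         # class whose evidence we explain

model.zero_grad()
logits[0, cls].backward()                 # gradients of the class score

# Grad-CAM: weight each feature map by its mean gradient, sum, apply ReLU,
# then upsample to the input size and normalize to [0, 1].
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

print(cam.shape)  # (1, 1, 224, 224): heatmap of influential regions
```

In a clinical workflow, such a heatmap would be overlaid on the original scan so a clinician can check whether the model attended to clinically relevant regions before trusting its prediction.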
Similar Papers
Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview
CV and Pattern Recognition
Shows doctors why computers think images show disease.
Explainable Artificial Intelligence techniques for interpretation of food datasets: a review
Artificial Intelligence
Makes food machines explain why food is good.
Explaining What Machines See: XAI Strategies in Deep Object Detection Models
CV and Pattern Recognition
Shows how smart computers "see" to make them trustworthy.