Beyond Occlusion: In Search for Near Real-Time Explainability of CNN-Based Prostate Cancer Classification
By: Martin Krebs, Jan Obdržálek, Vít Musil, et al.
Deep neural networks are starting to show their worth in critical applications such as assisted cancer diagnosis. However, for their outputs to be accepted in practice, the results they provide should be explainable in a way easily understood by pathologists. A well-known and widely used explanation technique is occlusion, which, however, can take a long time to compute, slowing both development and interaction with pathologists. In this work, we set out to find a faster replacement for occlusion in a successful system for detecting prostate cancer. Since there is no established framework for comparing the performance of different explanation methods, we first identified suitable comparison criteria and selected corresponding metrics. Based on the results, we were able to choose an alternative explanation method, which cut the previously required explanation time by a factor of at least 10, without any negative impact on the quality of outputs. This speedup enables rapid iteration in model development and debugging and brings us closer to adopting AI-assisted prostate cancer detection in clinical settings. We propose that our approach to finding a replacement for occlusion can be used to evaluate candidate methods in other related applications.
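To illustrate why occlusion is slow, the following is a minimal sketch of occlusion-based sensitivity mapping (not the authors' implementation): a patch is slid over the input, each patch is replaced with a baseline value, and the drop in the classifier's score is recorded. Because every patch position requires a separate forward pass, the cost grows with image size divided by stride squared. The `toy_predict` model and all parameter names here are illustrative assumptions.

```python
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8, fill=0.0):
    """Occlusion sensitivity: slide a patch over the image, replace it
    with a baseline value, and record how much the model's score drops.
    Large drops mark regions the classifier relies on. One call to
    `predict` per patch position is what makes this method slow."""
    h, w = image.shape
    base = predict(image)  # score for the unoccluded input
    heat = np.zeros((h, w))
    count = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = base - predict(occluded)  # score drop caused by this patch
            heat[y:y + patch, x:x + patch] += drop
            count[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(count, 1)  # average drop per pixel

# Toy "model" (hypothetical): score is the mean intensity of the
# top-left quadrant, so occluding that quadrant dominates the heatmap.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, toy_predict, patch=8, stride=8)
```

For a 16x16 input with an 8x8 patch and stride 8, this already needs four forward passes; at whole-slide-image scale the pass count explains the runtime the abstract refers to.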