Explaining Digital Pathology Models via Clustering Activations
By: Adam Bajger, Jan Obdržálek, Vojtěch Kůr, and more
Potential Business Impact:
Shows doctors how computers see diseases in slides.
We present a clustering-based explainability technique for digital pathology models based on convolutional neural networks. Unlike commonly used saliency-map methods, such as occlusion, GradCAM, or relevance propagation, which highlight the regions that contribute most to the prediction for a single slide, our method shows the global behaviour of the model under consideration, while also providing more fine-grained information. The resulting clusters can be visualised not only to understand the model, but also to increase confidence in its operation, leading to faster adoption in clinical practice. We also evaluate the performance of our technique on an existing model for detecting prostate cancer, demonstrating its usefulness.
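To make the idea concrete, the sketch below clusters patch-level activation vectors, the core step the abstract describes. This is a minimal illustration, not the authors' implementation: the CNN feature extractor is mocked with synthetic vectors (in practice these would be penultimate-layer activations for tissue patches), and the plain k-means routine with farthest-point seeding is our own assumption.

```python
# Hedged sketch: grouping patch-level CNN activations so the clusters can be
# visualised back on the slide, exposing the model's global behaviour.
# Synthetic stand-ins are used for real activations; all names are illustrative.
import numpy as np

def kmeans(acts, k, iters=20):
    """Plain k-means over activation vectors (one row per tissue patch),
    seeded deterministically with farthest-point initialisation."""
    centroids = [acts[0]]
    for _ in range(k - 1):
        # Next seed: the patch farthest from all centroids chosen so far.
        dists = np.min([np.linalg.norm(acts - c, axis=1) for c in centroids], axis=0)
        centroids.append(acts[np.argmax(dists)])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each patch to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(acts[:, None] - centroids[None], axis=2), axis=1)
        # Recompute each centroid; keep the old one if its cluster emptied.
        centroids = np.array([
            acts[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
            for c in range(k)])
    return labels, centroids

# Synthetic "activations": two well-separated groups of 128-d vectors,
# standing in for, say, tumour-like vs. stroma-like patches.
rng = np.random.default_rng(1)
acts = np.vstack([rng.normal(0, 1, (40, 128)), rng.normal(6, 1, (40, 128))])
labels, _ = kmeans(acts, k=2)
# Each synthetic group lands in a single cluster, so the cluster regions
# could be painted back onto the slide for inspection.
print(sorted(set(labels[:40].tolist())), sorted(set(labels[40:].tolist())))
```

On real slides, the cluster identity of each patch would be rendered as a colour overlay, which is what lets a pathologist inspect the model's behaviour across many slides at once rather than one saliency map at a time.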
Similar Papers
Beyond Occlusion: In Search for Near Real-Time Explainability of CNN-Based Prostate Cancer Classification
CV and Pattern Recognition
Finds cancer faster, helping doctors diagnose sooner.
Through the Static: Demystifying Malware Visualization via Explainability
Cryptography and Security
Helps computers spot bad files by showing how they think.
Information-driven Fusion of Pathology Foundation Models for Enhanced Disease Characterization
CV and Pattern Recognition
Combines AI to better find cancer in pictures.