Deeply Explainable Artificial Neural Network
By: David Zucker
Potential Business Impact:
Makes AI explain its medical image guesses.
While deep learning models have demonstrated remarkable success in numerous domains, their black-box nature remains a significant limitation, especially in critical fields such as medical image analysis and inference. Existing explainability methods, such as SHAP, LIME, and Grad-CAM, are typically applied post hoc, adding computational overhead and sometimes producing inconsistent or ambiguous results. In this paper, we present the Deeply Explainable Artificial Neural Network (DxANN), a novel deep learning architecture that embeds explainability ante hoc, directly into the training process. Unlike conventional models that require external interpretation methods, DxANN is designed to produce per-sample, per-feature explanations as part of the forward pass. Built on a flow-based framework, it enables both accurate predictions and transparent decision-making, and is particularly well-suited for image-based tasks. While our focus is on medical imaging, the DxANN architecture is readily adaptable to other data modalities, including tabular and sequential data. DxANN marks a step toward intrinsically interpretable deep learning, offering a practical solution for applications where trust and accountability are essential.
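To make the ante-hoc idea concrete, here is a minimal, hypothetical PyTorch sketch of a network whose forward pass returns a per-pixel relevance map alongside its prediction, so no post-hoc tool is needed. This is not the DxANN architecture itself (the abstract describes a flow-based framework whose details are not given here); the class name, layer sizes, and gating scheme are illustrative assumptions.

# Hypothetical sketch of ante-hoc, per-feature explanation.
# NOT the paper's DxANN: all names and design choices here are assumptions.
import torch
import torch.nn as nn

class AnteHocExplainerNet(nn.Module):
    """Toy image classifier whose forward pass also emits a per-pixel
    relevance map, so explanations require no post-hoc method."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared feature extractor.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Explanation head: one relevance score per input pixel.
        self.relevance = nn.Conv2d(16, 1, kernel_size=1)
        # Prediction head operates on relevance-gated features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x: torch.Tensor):
        feats = self.features(x)
        # Sigmoid keeps the map in [0, 1]; it acts as a soft mask.
        explanation = torch.sigmoid(self.relevance(feats))
        # Gating ties the prediction to the explanation, so the map is
        # trained jointly with the task loss (the ante-hoc property).
        logits = self.classifier(feats * explanation)
        return logits, explanation

# Usage: a single forward pass yields both outputs.
model = AnteHocExplainerNet(num_classes=2)
image = torch.randn(4, 1, 64, 64)           # batch of grayscale images
logits, explanation = model(image)
print(logits.shape, explanation.shape)      # (4, 2) and (4, 1, 64, 64)

The design point this illustrates is that the explanation is an output of the same computation graph as the prediction and is shaped by the task loss during training, in contrast to SHAP, LIME, or Grad-CAM, which interpret a frozen model after the fact.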
Similar Papers
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.
Explainable Hierarchical Deep Learning Neural Networks (Ex-HiDeNN)
Machine Learning (CS)
Finds simple math rules from messy data.
xDNN(ASP): Explanation Generation System for Deep Neural Networks powered by Answer Set Programming
Artificial Intelligence
Shows how computer brains make decisions.