Aligned explanations in neural networks
By: Corentin Lobet, Francesca Chiaromonte
Potential Business Impact:
Makes AI's decisions easy to understand.
Feature attribution is the dominant paradigm for explaining deep neural networks. However, most existing methods only loosely reflect the model's prediction-making process, merely painting the black box white. We argue that explanatory alignment is a key aspect of trustworthiness in prediction tasks: explanations must be directly linked to predictions rather than serving as post-hoc rationalizations. We present model readability as a design principle enabling alignment, and PiNets as a modeling framework for pursuing it in a deep learning context. PiNets are pseudo-linear networks that produce instance-wise linear predictions in an arbitrary feature space, making them linearly readable. We illustrate their use on image classification and segmentation tasks, demonstrating how PiNets produce explanations that are faithful across multiple criteria in addition to being aligned.
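To make the idea of an instance-wise linear, "linearly readable" prediction concrete, here is a minimal PyTorch sketch. It is an illustration of the general pseudo-linear pattern described in the abstract, not the authors' actual PiNet architecture: the class names (PseudoLinearNet), the choice of encoder, and the coefficient network are all assumptions made for the example.

```python
# Hypothetical sketch of a pseudo-linear classifier: logits are an instance-wise
# linear combination of features, so the per-feature contributions ARE the
# explanation, aligned with the prediction by construction rather than post hoc.
# This is not the paper's PiNet implementation; names and layers are illustrative.
import torch
import torch.nn as nn


class PseudoLinearNet(nn.Module):
    """Predicts logits as <w(x), phi(x)> + b: linear in the feature space phi(x),
    with instance-specific coefficients w(x)."""

    def __init__(self, in_dim: int, feat_dim: int, n_classes: int):
        super().__init__()
        # Feature map phi(x): any differentiable encoder could be used here.
        self.phi = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Coefficient generator w(x): one weight vector per class, per instance.
        self.coef = nn.Linear(in_dim, feat_dim * n_classes)
        self.bias = nn.Parameter(torch.zeros(n_classes))
        self.n_classes, self.feat_dim = n_classes, feat_dim

    def forward(self, x):
        feats = self.phi(x)                                        # (B, F)
        w = self.coef(x).view(-1, self.n_classes, self.feat_dim)   # (B, C, F)
        logits = torch.einsum("bcf,bf->bc", w, feats) + self.bias  # (B, C)
        # Each entry of w * feats is the exact contribution of one feature to a
        # class logit, so the explanation sums to the prediction by definition.
        return logits, w * feats.unsqueeze(1)


# Usage: predictions and attributions come from the same forward pass.
model = PseudoLinearNet(in_dim=32, feat_dim=16, n_classes=3)
x = torch.randn(4, 32)
logits, contributions = model(x)
assert torch.allclose(logits, contributions.sum(-1) + model.bias, atol=1e-5)
```

Under this reading, "linear readability" means the attribution decomposition is exact for every instance, which is what distinguishes it from post-hoc feature-attribution methods applied to an arbitrary network.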
Similar Papers
Training Feature Attribution for Vision Models
CV and Pattern Recognition
Shows how bad training pictures trick computers.
Attribution Explanations for Deep Neural Networks: A Theoretical Perspective
Machine Learning (CS)
Makes AI decisions easier to understand.
NAEx: A Plug-and-Play Framework for Explaining Network Alignment
Machine Learning (CS)
Explains how computer networks match up.