Exploring specialization and sensitivity of convolutional neural networks in the context of simultaneous image augmentations
By: Pavel Kharyuk, Sergey Matveev, Ivan Oseledets
Potential Business Impact:
Helps computers explain their decisions the way doctors explain theirs.
Drawing parallels with the way biological networks are studied, we adapt the treatment–control paradigm to explainable artificial intelligence research and enrich it with multi-parametric input alterations. In this study, we propose a framework for investigating how input data augmentations affect a network's internal inference. The internal changes in network operation are reflected in changes of activation variance, which can be decomposed into components attributable to each augmentation using Sobol indices and Shapley values. These quantities make it possible to visualize sensitivity to the different variables and to use them for guided masking of activations. In addition, we introduce a method for single-class sensitivity analysis, in which candidates are filtered according to how well they match the prediction bias produced by targeted damaging of the activations. Relying on the observed parallels, we anticipate that the developed framework can potentially be transferred to studying biological neural networks in complex environments.
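To make the variance decomposition concrete, the sketch below estimates first-order Sobol indices of a per-channel activation summary with respect to two augmentation parameters (rotation angle and brightness factor), then picks the channels most sensitive to rotation as masking candidates. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy convolutional model, the choice of rotation and brightness as the augmentations, the parameter ranges, and the pick-freeze Sobol estimator are all placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def activation(model, img, angle, brightness):
    # Apply the two augmentations, run the model, summarize each channel by its mean.
    x = TF.rotate(img, angle)
    x = TF.adjust_brightness(x, brightness)
    with torch.no_grad():
        a = model(x.unsqueeze(0))            # (1, C, H, W)
    return a.mean(dim=(0, 2, 3)).numpy()     # one scalar per channel

# Toy stand-in for an internal CNN layer (assumption, not the paper's model).
torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
img = torch.rand(3, 32, 32)

rng = np.random.default_rng(0)
N = 256
# Two independent samples of the augmentation parameters (pick-freeze scheme):
# column 0 = rotation angle in degrees, column 1 = brightness factor.
A = np.column_stack([rng.uniform(-30, 30, N), rng.uniform(0.5, 1.5, N)])
B = np.column_stack([rng.uniform(-30, 30, N), rng.uniform(0.5, 1.5, N)])

f = lambda p: activation(model, img, float(p[0]), float(p[1]))
YA = np.array([f(p) for p in A])             # (N, C)
YB = np.array([f(p) for p in B])

S = np.zeros((2, YA.shape[1]))               # first-order index per parameter, per channel
var = YA.var(axis=0)
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # vary parameter i, freeze the rest
    YABi = np.array([f(p) for p in ABi])
    # Classic pick-freeze estimator: S_i = Cov(f(B), f(AB_i)) / Var(f).
    S[i] = ((YB * YABi).mean(axis=0)
            - YB.mean(axis=0) * YABi.mean(axis=0)) / (var + 1e-12)

# Guided masking: select the k channels most sensitive to rotation (parameter 0).
k = 2
mask_idx = np.argsort(S[0])[-k:]
print("rotation-sensitive channel candidates:", mask_idx)
```

When the augmentation parameters interact, first-order Sobol indices no longer sum to one; Shapley values, as the abstract notes, distribute the total activation variance exactly across the augmentations, and they can in principle be estimated from the same kind of sampled activations.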
Similar Papers
Application of Sensitivity Analysis Methods for Studying Neural Network Models
Numerical Analysis
Shows how computer brains make decisions.
An approach based on class activation maps for investigating the effects of data augmentation on neural networks for image classification
Machine Learning (CS)
Helps computers see better by showing them more pictures.
Synthesizing Images on Perceptual Boundaries of ANNs for Uncovering and Manipulating Human Perceptual Variability
Artificial Intelligence
Predicts and changes how people see things.