Hidden Monotonicity: Explaining Deep Neural Networks via their DC Decomposition

Published: January 12, 2026 | arXiv ID: 2601.07700v1

By: Jakob Paul Zimmermann, Georg Loho

Potential Business Impact:

Makes AI systems show why they make their decisions.

Business Areas:
Multi-level Marketing, Sales and Marketing

It has been demonstrated in various contexts that monotonicity leads to better explainability in neural networks. However, not every function can be well approximated by a monotone neural network. We demonstrate that monotonicity can still be used in two ways to boost explainability. First, we use an adaptation of the decomposition of a trained ReLU network into two monotone and convex parts, overcoming the numerical obstacles that arise from an inherent blowup of the weights in this procedure. Our proposed saliency methods -- SplitCAM and SplitLRP -- improve on state-of-the-art results on both VGG16 and ResNet18 networks on ImageNet-S across all Quantus saliency metric categories. Second, we show that training a model as the difference between two monotone neural networks results in a system with strong self-explainability properties.
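To make the underlying idea concrete, below is a minimal sketch of the classical difference-of-convex (DC) split of a ReLU network: each weight matrix is separated into its positive and negative entries, and the network output is tracked as a difference of two convex piecewise-linear components. This is only an illustration of the general construction the abstract refers to; the paper's adapted procedure additionally addresses the weight blowup and the monotonicity of the parts, and the function names here (`forward`, `dc_forward`) are hypothetical, not taken from the paper's code.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(layers, x):
    """Plain forward pass of a ReLU network given [(W1, b1), ..., (WL, bL)]."""
    h = x
    for i, (W, b) in enumerate(layers):
        h = W @ h + b
        if i < len(layers) - 1:          # no ReLU after the last layer
            h = relu(h)
    return h

def dc_forward(layers, x):
    """Evaluate a DC split f(x) = g(x) - h(x) of the same network.

    Each weight matrix is written as W = W_plus - W_minus with
    W_plus = max(W, 0) and W_minus = max(-W, 0). The pair (p, q) tracks
    the two components so that the current pre-activation equals p - q,
    and ReLU(p - q) = max(p, q) - q keeps both components convex.
    """
    p, q = x, np.zeros_like(x)
    for i, (W, b) in enumerate(layers):
        Wp, Wm = np.maximum(W, 0.0), np.maximum(-W, 0.0)
        a = Wp @ p + Wm @ q + b          # first component after the affine map
        c = Wp @ q + Wm @ p              # second component after the affine map
        if i < len(layers) - 1:
            p, q = np.maximum(a, c), c   # ReLU(a - c) = max(a, c) - c
        else:
            p, q = a, c
    return p, q

# Tiny random network: the split reproduces the original output exactly.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((3, 8)), rng.standard_normal(3))]
x = rng.standard_normal(4)
g, h = dc_forward(layers, x)
assert np.allclose(forward(layers, x), g - h)
```

Note how the split weights W_plus and W_minus grow the effective magnitude of the intermediate components even though their difference stays bounded; this is the kind of blowup the paper's adaptation is designed to keep numerically under control.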

Country of Origin
πŸ‡©πŸ‡ͺ Germany

Page Count
35 pages

Category
Computer Science:
Computer Vision and Pattern Recognition