
Explaining Models under Multivariate Bernoulli Distribution via Hoeffding Decomposition

Published: October 8, 2025 | arXiv ID: 2510.07088v1

By: Baptiste Ferrere, Nicolas Bousquet, Fabrice Gamboa, and more

Potential Business Impact:

Explains how a predictive model's output depends on its binary inputs, even when those inputs are correlated.

Business Areas:
A/B Testing, Data and Analytics

Explaining the behavior of predictive models with random inputs can be achieved through a decomposition into sub-models with more easily interpretable features. Arising from the uncertainty quantification community, recent results have demonstrated the existence and uniqueness of a generalized Hoeffding decomposition for such predictive models when the stochastic input variables are correlated, based on concepts of oblique projection onto L² subspaces. This article focuses on the case where the input variables have Bernoulli distributions and provides a complete description of this decomposition. We show that in this case the underlying L² subspaces are one-dimensional and that the functional decomposition is explicit. This leads to a complete interpretability framework and theoretically allows reverse engineering. Indicators of the influence of inputs on the output prediction (exemplified by Sobol' indices and Shapley effects) can be explicitly derived. Illustrated by numerical experiments, this type of analysis proves useful for addressing decision-support problems based on binary decision diagrams, Boolean networks, or binary neural networks. The article outlines perspectives for exploring high-dimensional settings and, beyond the case of binary inputs, extending these findings to models with finite countable inputs.
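To give a concrete feel for this kind of decomposition, below is a minimal Python sketch of the classical Hoeffding (ANOVA) decomposition and first-order Sobol' indices for a toy Boolean model with two independent Bernoulli inputs. It illustrates the idea only: the paper's contribution is the harder correlated-input case handled via oblique projections, which this sketch does not implement. The model f (an OR gate) and the parameters p1, p2 are illustrative choices, not taken from the paper.

from itertools import product

# Toy model: a Boolean OR gate, f(x1, x2) = x1 OR x2.
def f(x1, x2):
    return float(x1 or x2)

p1, p2 = 0.3, 0.6  # assumed marginal Bernoulli parameters (illustrative)

# Joint pmf under the independence assumption.
def pmf(x1, x2):
    return (p1 if x1 else 1 - p1) * (p2 if x2 else 1 - p2)

# Exact expectation by enumerating the four binary input combinations.
f0 = sum(f(x1, x2) * pmf(x1, x2) for x1, x2 in product((0, 1), repeat=2))

# Main effects: f_i(x_i) = E[f | X_i = x_i] - f0 (one-dimensional components).
def main_effect_1(x1):
    return sum(f(x1, x2) * (p2 if x2 else 1 - p2) for x2 in (0, 1)) - f0

def main_effect_2(x2):
    return sum(f(x1, x2) * (p1 if x1 else 1 - p1) for x1 in (0, 1)) - f0

# Variances of the components and the total variance of the model output.
var_total = sum((f(x1, x2) - f0) ** 2 * pmf(x1, x2)
                for x1, x2 in product((0, 1), repeat=2))
var_1 = sum(main_effect_1(x1) ** 2 * (p1 if x1 else 1 - p1) for x1 in (0, 1))
var_2 = sum(main_effect_2(x2) ** 2 * (p2 if x2 else 1 - p2) for x2 in (0, 1))

# First-order Sobol' indices; the remaining share is the interaction term.
print("S1  =", var_1 / var_total)
print("S2  =", var_2 / var_total)
print("S12 =", 1 - (var_1 + var_2) / var_total)

Because each Bernoulli input takes only two values, every centered component above lives in a one-dimensional space (spanned by x_i minus its mean), which is exactly the property the paper exploits to make the decomposition, and hence the sensitivity indices, fully explicit.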

Country of Origin
🇫🇷 France

Page Count
28 pages

Category
Statistics: Machine Learning (stat.ML)