Mixtures of Transparent Local Models
By: Niffa Cheick Oumar Diaby, Thierry Duchesne, Mario Marchand
The predominance of machine learning models in many spheres of human activity has led to a growing demand for their transparency. Transparent models make it possible to assess factors such as security or non-discrimination. In this paper, we propose a mixture of transparent local models as an alternative approach to designing interpretable (or transparent) models. Our approach is designed for situations where a simple, transparent function is suitable for modeling the labels of instances in some localities (regions) of the input space, but where the labeling function may change abruptly from one locality to another. The proposed algorithm therefore learns both the transparent labeling functions and the localities of the input space in which each function achieves a small risk. Using a new multi-predictor (and multi-locality) loss function, we establish rigorous PAC-Bayesian risk bounds for binary linear classification and for linear regression. In both cases, synthetic data sets are used to illustrate how the learning algorithms work. Results on real data sets highlight the competitiveness of our approach compared to other existing methods, as well as to certain opaque models.
Keywords: PAC-Bayes, risk bounds, local models, transparent models, mixtures of local transparent models.
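The abstract describes jointly learning a set of transparent predictors and the localities in which each achieves small risk. As an illustration only, the following is a minimal sketch of that general idea for one-dimensional linear regression, using a Lloyd-style alternation between locality assignment and per-locality least-squares refits; the variable names and the alternating scheme are assumptions for this sketch, not the authors' algorithm, and the paper's PAC-Bayesian machinery is not implemented here.

```python
# Hypothetical sketch: a mixture of transparent local models.
# Alternate between (i) assigning each training point to the local model
# with the smallest loss and (ii) refitting a transparent (linear) model
# on each resulting locality. Illustrative only; not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data whose labeling function changes abruptly
# between two localities of the input space.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.where(X[:, 0] < 0.0, 2.0 * X[:, 0] + 1.0, -3.0 * X[:, 0]) \
    + 0.05 * rng.standard_normal(200)

K = 2                                        # number of local models / localities
Xb = np.hstack([X, np.ones((len(X), 1))])    # append an intercept column
W = rng.standard_normal((K, Xb.shape[1]))    # one linear model per locality

for _ in range(20):
    # (i) locality assignment: each point goes to its best predictor
    losses = (Xb @ W.T - y[:, None]) ** 2    # (n, K) squared losses
    z = losses.argmin(axis=1)
    # (ii) refit each transparent local model by least squares
    for k in range(K):
        mask = z == k
        if mask.any():
            W[k], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)

print("per-locality coefficients (slope, intercept):")
print(np.round(W, 2))
```

Each local model stays transparent (a slope and an intercept per locality), while the mixture as a whole captures the abrupt changes in the labeling function between localities that the abstract targets.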