ScoresActivation: A New Activation Function for Model Agnostic Global Explainability by Design
By: Emanuel Covaci, Fabian Galis, Radu Balan, and more
Potential Business Impact:
Makes AI explain its choices while learning.
Understanding the decisions of large deep learning models is a critical challenge for building transparent and trustworthy systems. Although current post-hoc explanation methods offer valuable insights into feature importance, they are inherently disconnected from model training, limiting their faithfulness and utility. In this work, we introduce a novel differentiable approach to global explainability by design, integrating feature importance estimation directly into model training. Central to our method is the ScoresActivation function, a feature-ranking mechanism embedded within the learning pipeline. This integration enables models to prioritize features according to their contribution to predictive performance in a differentiable, end-to-end trainable manner. Evaluations across benchmark datasets show that our approach yields globally faithful, stable feature rankings aligned with SHAP values and ground-truth feature importance, while maintaining high predictive performance. Moreover, feature scoring is 150 times faster than the classical SHAP method, requiring only 2 seconds during training compared to SHAP's 300 seconds for feature ranking in the same configuration. Our method also improves classification accuracy by 11.24% with 10 features (5 relevant) and 29.33% with 16 features (5 relevant, 11 irrelevant), demonstrating robustness to irrelevant inputs. This work bridges the gap between model accuracy and interpretability, offering a scalable framework for inherently explainable machine learning.
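The abstract does not give the form of ScoresActivation itself, but the core idea it describes, learnable per-feature scores that gate the inputs and are trained jointly with the model so that the scores become a global feature ranking, can be sketched in a few lines. The code below is a hypothetical minimal illustration (plain numpy, a sigmoid gate, a linear predictor, and hand-written gradients), not the authors' implementation; all names and hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 features, only the first 2 actually drive the target.
n, d = 512, 6
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]

# Learnable parameters: per-feature scores s (the "explanation")
# and linear weights w (the predictor), trained jointly.
s = np.zeros(d)
w = rng.normal(size=d) * 0.1
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    g = sigmoid(s)        # differentiable gate per feature, in (0, 1)
    Xg = X * g            # gated inputs fed to the predictor
    err = Xg @ w - y      # residual; dL/dpred for 0.5 * MSE loss
    # Chain rule, written out by hand: loss -> prediction -> gate -> score.
    grad_w = Xg.T @ err / n
    grad_g = (X * w).T @ err / n
    grad_s = grad_g * g * (1.0 - g)   # sigmoid derivative
    w -= lr * grad_w
    s -= lr * grad_s

# The learned scores induce a global feature ranking as a by-product
# of training, with no separate post-hoc attribution pass.
ranking = np.argsort(-sigmoid(s))
print(ranking[:2])  # the two relevant features should rank first
```

Because the gate gradient for an irrelevant feature is driven only by noise, its score decays while the relevant features' scores grow, which mirrors the robustness-to-irrelevant-inputs behavior the abstract reports.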
Similar Papers
Rigorous Feature Importance Scores based on Shapley Value and Banzhaf Index
Artificial Intelligence
Helps AI understand why it's wrong.
The Feature Understandability Scale for Human-Centred Explainable AI: Assessing Tabular Feature Importance
Human-Computer Interaction
Helps AI explain itself using easy words.