Causal SHAP: Feature Attribution with Dependency Awareness through Causal Discovery
By: Woon Yee Ng, Li Rong Wang, Siyuan Liu, and more
Potential Business Impact:
Shows which features *really* drive a model's predictions, not just which ones correlate with them.
Explaining machine learning (ML) predictions has become crucial as ML models are increasingly deployed in high-stakes domains such as healthcare. While SHapley Additive exPlanations (SHAP) is widely used for model interpretability, it fails to differentiate between causality and correlation, often misattributing feature importance when features are highly correlated. We propose Causal SHAP, a novel framework that integrates causal relationships into feature attribution while preserving many desirable properties of SHAP. By combining the Peter-Clark (PC) algorithm for causal discovery with the Intervention calculus when the DAG is Absent (IDA) algorithm for quantifying causal strength, our approach addresses this weakness of SHAP. Specifically, Causal SHAP reduces the attribution scores of features that are merely correlated with the target, as validated through experiments on both synthetic and real-world datasets. This study contributes to the field of Explainable AI (XAI) by providing a practical framework for causality-aware model explanations. Our approach is particularly valuable in domains such as healthcare, where understanding true causal relationships is critical for informed decision-making.
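To make the failure mode concrete, here is a minimal toy sketch (not the authors' algorithm): exact Shapley values split credit evenly between two features that carry the same signal, even when only one is causal. Down-weighting attributions by a causal-strength score then removes credit from the merely correlated feature. The `causal_strength` values below are hypothetical stand-ins for what the paper would estimate via PC and IDA.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all coalitions (fine for few features)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[f] += w * (value_fn(set(S) | {f}) - value_fn(set(S)))
    return phi

# Toy value function: x1 truly drives the prediction, but x2 duplicates
# x1's signal, so knowing either one "reveals" the model output.
def value_fn(S):
    return 1.0 if ("x1" in S or "x2" in S) else 0.0

phi = shapley_values(["x1", "x2", "x3"], value_fn)
# Plain SHAP splits credit equally between the correlated pair: 0.5 each.

# Hypothetical causal strengths (stand-ins for IDA estimates):
# x1 causes the target; x2 is only correlated; x3 is irrelevant.
causal_strength = {"x1": 1.0, "x2": 0.0, "x3": 0.0}
causal_phi = {f: phi[f] * causal_strength[f] for f in phi}
# The correlated-but-non-causal feature x2 now receives zero attribution.
```

The actual Causal SHAP framework integrates the causal adjustment into the attribution itself rather than rescaling after the fact; this sketch only illustrates why correlation-blind credit-splitting misleads and how causal strengths correct it.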
Similar Papers
Combining SHAP and Causal Analysis for Interpretable Fault Detection in Industrial Processes
Machine Learning (CS)
Finds why machines break down, not just that they do.
SHAP-Based Supervised Clustering for Sample Classification and the Generalized Waterfall Plot
Machine Learning (CS)
Shows why computers make certain decisions.
ContextualSHAP: Enhancing SHAP Explanations Through Contextual Language Generation
Artificial Intelligence
Explains AI decisions in simple words for everyone.