The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models
By: Joshua Wolff Anderson, Shyam Visweswaran
Potential Business Impact:
Shows that making AI fairer can change the reasons it gives for its decisions.
Trustworthy machine learning in healthcare requires strong predictive performance, fairness, and explanations. While it is known that improving fairness can affect predictive performance, little is known about how fairness improvements influence explainability, an essential ingredient for clinical trust. Clinicians may hesitate to rely on a model whose explanations shift after fairness constraints are applied. In this study, we examine how enhancing fairness through bias mitigation techniques reshapes Shapley-based feature rankings. We quantify changes in feature importance rankings after applying fairness constraints across three datasets: pediatric urinary tract infection risk, direct anticoagulant bleeding risk, and recidivism risk. We also evaluate the stability of Shapley-based rankings across multiple model classes. We find that increasing model fairness across racial subgroups can significantly alter feature importance rankings, sometimes in different ways for different groups. These results highlight the need to assess accuracy, fairness, and explainability jointly in model evaluation rather than in isolation.
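As a rough illustration of the kind of analysis the abstract describes, the sketch below trains a model with and without a simple bias mitigation, computes mean-|SHAP| feature importances for each, and compares the resulting rankings with Kendall's tau, overall and per subgroup. This is a minimal sketch, not the authors' pipeline: the synthetic data, the sample-reweighting mitigation, the `group` labels, and the `shap_importance` helper are all assumptions for illustration.

```python
# Minimal sketch: how much does a fairness intervention reshuffle
# Shapley-based feature rankings? (Illustrative only; not the paper's method.)
import numpy as np
import shap
from scipy.stats import kendalltau
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
group = rng.integers(0, 2, size=len(y))  # hypothetical subgroup labels

def shap_importance(model, X):
    """Mean |SHAP value| per feature; its ranks give the feature ranking."""
    sv = shap.TreeExplainer(model).shap_values(X)
    return np.abs(sv).mean(axis=0)

# Baseline model with no fairness intervention.
base = GradientBoostingClassifier(random_state=0).fit(X, y)

# "Fairer" model: reweight samples so each subgroup contributes equally,
# a simple pre-processing mitigation standing in for the paper's techniques.
weights = 1.0 / np.where(group == 0, np.mean(group == 0), np.mean(group == 1))
fair = GradientBoostingClassifier(random_state=0).fit(X, y, sample_weight=weights)

# Kendall's tau between the two importance vectors: 1.0 means the feature
# ranking is unchanged; lower values mean the mitigation reshuffled it.
tau, _ = kendalltau(shap_importance(base, X), shap_importance(fair, X))
print(f"Overall ranking stability (Kendall's tau): {tau:.3f}")

# Per-subgroup comparison, since rankings can shift differently across groups.
for g in (0, 1):
    Xg = X[group == g]
    tau_g, _ = kendalltau(shap_importance(base, Xg), shap_importance(fair, Xg))
    print(f"Group {g} ranking stability: {tau_g:.3f}")
```

A low tau for one subgroup but not the other would mirror the paper's finding that fairness constraints can reshape explanations differently across groups.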
Similar Papers
Argumentative Debates for Transparent Bias Detection [Technical Report]
Artificial Intelligence
Finds unfairness in AI by explaining its reasoning.
Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
Machine Learning (CS)
Makes AI fairer without hurting anyone's performance.
Fair and Explainable Credit-Scoring under Concept Drift: Adaptive Explanation Frameworks for Evolving Populations
Machine Learning (CS)
Keeps loan decisions fair as data changes.