Understanding Fairness and Prediction Error through Subspace Decomposition and Influence Analysis
By: Enze Shi, Pankaj Bhagwat, Zhixian Yang, and more
Potential Business Impact:
Reduces bias that models pick up from historical data, so automated decisions are fairer without sacrificing accuracy.
Machine learning models have achieved widespread success but often inherit and amplify historical biases, resulting in unfair outcomes. Traditional fairness methods typically impose constraints at the prediction level, without addressing underlying biases in data representations. In this work, we propose a principled framework that adjusts data representations to balance predictive utility and fairness. Using sufficient dimension reduction, we decompose the feature space into target-relevant, sensitive, and shared components, and control the fairness-utility trade-off by selectively removing sensitive information. We provide a theoretical analysis of how prediction error and fairness gaps evolve as shared subspaces are added, and employ influence functions to quantify their effects on the asymptotic behavior of parameter estimates. Experiments on both synthetic and real-world datasets validate our theoretical insights and show that the proposed method effectively improves fairness while preserving predictive performance.
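The decomposition idea in the abstract can be illustrated with a small numerical sketch. The code below is not the authors' estimator: it uses sliced inverse regression (SIR) as a generic stand-in for sufficient dimension reduction, a synthetic dataset, and an ordinary least-squares predictor, all of which are illustrative assumptions. It estimates a target-relevant subspace and a sensitive subspace, splits the sensitive direction into a shared part and a sensitive-exclusive remainder, and compares prediction error against a demographic-parity gap as sensitive information is projected out.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced Inverse Regression (SIR): a classical sufficient-dimension-
    reduction estimator of directions along which X is informative about y."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + 1e-8 * np.eye(p)
    evals, evecs = np.linalg.eigh(cov)
    evals = np.clip(evals, 1e-8, None)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T          # cov^{-1/2} (whitening)
    Z = Xc @ W
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):   # slice on the response
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)
    B = W @ vecs[:, ::-1][:, :n_dirs]                     # top directions, X scale
    return B / np.linalg.norm(B, axis=0)

# Synthetic data: the sensitive attribute a and the target y share some signal.
rng = np.random.default_rng(0)
n, p = 2000, 6
X = rng.normal(size=(n, p))
a = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(float)
y = X[:, 1] + X[:, 2] + 0.5 * a + rng.normal(scale=0.5, size=n)

B_y = sir_directions(X, y, n_slices=10, n_dirs=2)  # target-relevant subspace
B_a = sir_directions(X, a, n_slices=2, n_dirs=1)   # sensitive subspace

# Split the sensitive direction into a part shared with the target subspace
# and a sensitive-exclusive remainder (orthogonal to the target subspace).
P_y = B_y @ np.linalg.pinv(B_y.T @ B_y) @ B_y.T
excl = (np.eye(p) - P_y) @ B_a
excl /= np.linalg.norm(excl, axis=0)
P_a = B_a @ B_a.T                                   # full sensitive subspace

variants = {
    "original features": X,
    "drop sensitive-exclusive": X - (X @ excl) @ excl.T,
    "drop full sensitive": X - X @ P_a,
}
for name, feats in variants.items():
    D = np.c_[np.ones(n), feats]
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    pred = D @ beta
    gap = abs(pred[a == 1].mean() - pred[a == 0].mean())  # demographic-parity gap
    mse = np.mean((pred - y) ** 2)
    print(f"{name:26s} MSE={mse:.3f}  fairness gap={gap:.3f}")
```

The influence-function analysis of parameter estimates described in the abstract is not reproduced here; the sketch only shows how prediction error and the fairness gap move as more of the sensitive subspace is removed from the representation.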
Similar Papers
Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-Language Models
CV and Pattern Recognition
Fixes AI that unfairly judges people's pictures.
Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint
Machine Learning (Stat)
Shows how computer decisions unfairly favor some groups.
Developing Fairness-Aware Task Decomposition to Improve Equity in Post-Spinal Fusion Complication Prediction
Machine Learning (CS)
Helps doctors predict surgery risks fairly for everyone.