Understanding Fairness and Prediction Error through Subspace Decomposition and Influence Analysis

Published: October 27, 2025 | arXiv ID: 2510.23935v1

By: Enze Shi, Pankaj Bhagwat, Zhixian Yang and more

Potential Business Impact:

Reduces bias that machine learning models inherit from historical data, so automated decisions are fairer without giving up predictive accuracy.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Machine learning models have achieved widespread success but often inherit and amplify historical biases, resulting in unfair outcomes. Traditional fairness methods typically impose constraints at the prediction level, without addressing underlying biases in data representations. In this work, we propose a principled framework that adjusts data representations to balance predictive utility and fairness. Using sufficient dimension reduction, we decompose the feature space into target-relevant, sensitive, and shared components, and control the fairness-utility trade-off by selectively removing sensitive information. We provide a theoretical analysis of how prediction error and fairness gaps evolve as shared subspaces are added, and employ influence functions to quantify their effects on the asymptotic behavior of parameter estimates. Experiments on both synthetic and real-world datasets validate our theoretical insights and show that the proposed method effectively improves fairness while preserving predictive performance.
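To make the decomposition idea concrete, here is a minimal illustrative sketch, not the authors' implementation: it estimates a sensitive subspace with a simple sliced-inverse-regression (SIR) style estimator and projects it out of the features before fitting a predictor. The synthetic data, the `sir_directions` helper, and the demographic-parity-style gap are assumptions for illustration; fully removing the sensitive subspace corresponds to one extreme of the fairness-utility trade-off, whereas the paper controls the trade-off by selectively retaining shared components.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# --- synthetic data (illustrative): one sensitive direction, one shared, one target-only ---
n, p = 2000, 6
s = rng.integers(0, 2, size=n)                  # sensitive attribute (two groups)
X = rng.normal(size=(n, p))
X[:, 0] += 1.5 * s                              # feature driven mainly by the sensitive attribute
X[:, 1] += 0.5 * s                              # "shared" feature: both sensitive and predictive
y = 2.0 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def sir_directions(X, t, n_slices=10, n_dirs=1):
    """Sliced inverse regression: leading directions of E[X | t] (hypothetical helper)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(p)
    L = np.linalg.cholesky(np.linalg.inv(cov))  # whitening transform
    Z = Xc @ L
    order = np.argsort(t)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)    # between-slice covariance of slice means
    _, vecs = np.linalg.eigh(M)
    dirs = L @ vecs[:, ::-1][:, :n_dirs]        # map top directions back to original scale
    return dirs / np.linalg.norm(dirs, axis=0)

def fairness_gap(pred, s):
    """Demographic-parity-style gap: difference in mean prediction across groups."""
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

# estimate the sensitive subspace from (X, s) and project it out of the features
B_s = sir_directions(X, s.astype(float), n_slices=2, n_dirs=1)
P_s = B_s @ np.linalg.pinv(B_s)                 # orthogonal projector onto span(B_s)
X_adj = X - X @ P_s                             # representation with sensitive component removed

for name, feats in [("raw features", X), ("sensitive subspace removed", X_adj)]:
    model = LinearRegression().fit(feats, y)
    pred = model.predict(feats)
    mse = np.mean((pred - y) ** 2)
    print(f"{name:28s}  MSE={mse:.3f}  fairness gap={fairness_gap(pred, s):.3f}")
```

On this toy data, removing the sensitive subspace shrinks the fairness gap at the cost of some prediction error from the shared feature, which is exactly the trade-off the paper analyzes.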

Country of Origin
🇨🇦 Canada

Page Count
18 pages

Category
Statistics: Machine Learning (stat.ML)