Score: 1

Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint

Published: November 14, 2025 | arXiv ID: 2511.11294v1

By: Bertille Tierny, Arthur Charpentier, François Hu

Potential Business Impact:

Shows where automated decisions favor some groups, separating bias that comes directly from a sensitive attribute from bias routed through correlated features, so linear models can be audited without retraining.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
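To make the direct/indirect distinction concrete, here is a minimal toy sketch (not the paper's analytical decomposition): it fits an OLS model on a binary sensitive attribute S and a correlated feature X, splits the group-wise prediction gap into a direct term (through the coefficient on S) and an indirect term (through X), and then applies a crude per-group mean shift as a stand-in for a demographic-parity post-processing step. All variable names and the mean-shift adjustment are illustrative assumptions.

```python
# Hypothetical illustration of direct vs. indirect bias in a linear model.
# Not the authors' method: a naive decomposition under a simple mean-shift
# adjustment, assuming a binary sensitive attribute S and one feature X.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Binary sensitive attribute and a feature correlated with it.
S = rng.integers(0, 2, size=n)
X = 1.5 * S + rng.normal(size=n)             # X is shifted by group membership
y = 2.0 * X + 1.0 * S + rng.normal(size=n)   # outcome depends on both

# Ordinary least squares on [intercept, S, X].
A = np.column_stack([np.ones(n), S, X])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
b0, b_S, b_X = beta

# Group-wise prediction gap (a demographic-parity violation for scores).
pred = A @ beta
gap = pred[S == 1].mean() - pred[S == 0].mean()

# Naive decomposition of that gap: a direct term through S itself and an
# indirect term through the correlated feature X.
direct = b_S * (1 - 0)                                   # coefficient on S
indirect = b_X * (X[S == 1].mean() - X[S == 0].mean())   # bias routed via X
print(f"gap={gap:.3f}  direct={direct:.3f}  indirect={indirect:.3f}")

# Crude post-processing: recentre predictions per group so group means
# coincide (a mean-shift stand-in for a demographic-parity adjustment).
fair_pred = pred.copy()
for s in (0, 1):
    fair_pred[S == s] += pred.mean() - pred[S == s].mean()
fair_gap = fair_pred[S == 1].mean() - fair_pred[S == 0].mean()
print(f"gap after mean-shift post-processing: {fair_gap:.3f}")
```

In this toy setup the direct and indirect terms sum to the observed gap, which is the kind of feature-level accounting the abstract describes; the paper's framework goes further by characterizing how the constraint reshapes each coefficient, rather than only recentring predictions.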

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
17 pages

Category
Statistics: Machine Learning (stat.ML)