Post-processing for Fair Regression via Explainable SVD
By: Zhiqun Zuo, Ding Zhu, Mohammad Mahdi Khalili
Potential Business Impact:
Makes computer predictions fair for everyone.
This paper presents a post-processing algorithm for training fair neural network regression models that satisfy statistical parity, utilizing an explainable singular value decomposition (SVD) of the weight matrix. We propose a linear transformation of the weight matrix, whereby the singular values derived from the SVD of the transformed matrix directly correspond to the differences in the first and second moments of the output distributions across the two groups. Consequently, we can convert the fairness constraints into constraints on the singular values. We analytically solve the problem of finding the optimal weights under these constraints. Experimental validation on various datasets demonstrates that our method achieves a similar or superior fairness-accuracy trade-off compared to the baselines, without using the sensitive attribute at inference time.
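To give a feel for the post-processing idea, the following is a minimal, hypothetical sketch (not the paper's algorithm): it adjusts a trained linear regression head so the group-conditional means of its predictions match, with the correction baked into the weights so the sensitive attribute is not needed at inference time. The names (mu_a, mu_b, W, project_out_mean_gap) and the projection-based fix are illustrative assumptions; the paper's transformed-SVD construction, second-moment constraints, and analytic solution are not reproduced here.

```python
# Hedged sketch (not the paper's method): post-process a linear regression head
# so that the group-conditional means of the predictions coincide, without using
# the sensitive attribute at inference time.
import numpy as np

rng = np.random.default_rng(0)

d_feat, d_out = 8, 1                      # feature and output dimensions (assumed)
mu_a = rng.normal(size=d_feat)            # group-a mean of the penultimate features
mu_b = rng.normal(size=d_feat)            # group-b mean of the penultimate features
W = rng.normal(size=(d_out, d_feat))      # trained (unconstrained) regression head

def project_out_mean_gap(W, mu_a, mu_b):
    """Return weights whose predictions have equal group-conditional means.

    The mean-prediction gap is W @ (mu_a - mu_b); right-multiplying W by the
    projector orthogonal to that direction drives the gap to zero. This is a
    crude stand-in for the constrained-SVD solution described in the abstract.
    """
    d = (mu_a - mu_b).reshape(-1, 1)
    P = np.eye(len(d)) - d @ d.T / (d.T @ d)   # projector orthogonal to d
    return W @ P

W_fair = project_out_mean_gap(W, mu_a, mu_b)

print("mean gap before:", W @ (mu_a - mu_b))        # generally nonzero
print("mean gap after :", W_fair @ (mu_a - mu_b))   # ~0 up to floating-point error
```

Because the correction is applied once to the weight matrix, inference proceeds exactly as before; only the first-moment (mean) gap is handled in this toy example, whereas the paper also constrains second moments via the singular values.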
Similar Papers
FairLRF: Achieving Fairness through Sparse Low Rank Factorization
Machine Learning (CS)
Makes AI fairer without losing accuracy.
Explainable post-training bias mitigation with distribution-based fairness metrics
Machine Learning (CS)
Makes AI fair without retraining.
Hidden Convexity of Fair PCA and Fast Solver via Eigenvalue Optimization
Machine Learning (CS)
Makes computer learning fairer and faster.