General Post-Processing Framework for Fairness Adjustment of Machine Learning Models
By: Léandre Eberhard, Nirek Sharma, Filipp Shelobolin, and more
Potential Business Impact:
Makes computer decisions fair without retraining.
As machine learning increasingly influences critical domains such as credit underwriting, public policy, and talent acquisition, ensuring compliance with fairness constraints is both a legal and ethical imperative. This paper introduces a novel framework for fairness adjustments that applies to diverse machine learning tasks, including regression and classification, and accommodates a wide range of fairness metrics. Unlike traditional approaches categorized as pre-processing, in-processing, or post-processing, our method adapts in-processing techniques for use as a post-processing step. By decoupling fairness adjustments from the model training process, our framework preserves model performance on average while enabling greater flexibility in model development. Key advantages include eliminating the need for custom loss functions, enabling fairness tuning using different datasets, accommodating proprietary models as black-box systems, and providing interpretable insights into the fairness adjustments. We demonstrate the effectiveness of this approach by comparing it to Adversarial Debiasing, showing that our framework achieves a comparable fairness/accuracy tradeoff on real-world datasets.
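To make the core idea concrete, here is a minimal sketch (not the authors' released code) of how an in-processing-style penalized objective can be repurposed as a post-processing step: the trained model stays frozen as a black box, and only a small adjustment layer on its scores is fit against a loss plus a fairness penalty. The names `fit_adjustment`, `parity_gap`, and the per-group affine form are illustrative assumptions, and demographic parity stands in for whichever fairness metric is chosen.

```python
# Sketch: post-hoc fairness adjustment of a frozen black-box model's scores.
# Assumes binary groups and a demographic-parity penalty for illustration.
import numpy as np
from scipy.optimize import minimize


def parity_gap(scores, groups):
    """Demographic-parity gap: absolute difference in mean score by group."""
    g0, g1 = scores[groups == 0], scores[groups == 1]
    return abs(g0.mean() - g1.mean())


def fit_adjustment(raw_scores, y, groups, lam=1.0):
    """Fit per-group affine adjustments a*s + b to frozen model scores.

    Minimizes squared error to labels plus lam * fairness penalty.
    Only the four adjustment parameters are optimized; the base model
    that produced raw_scores is never retrained.
    """
    def adjusted(theta):
        a = np.where(groups == 0, theta[0], theta[2])
        b = np.where(groups == 0, theta[1], theta[3])
        return a * raw_scores + b

    def objective(theta):
        s = adjusted(theta)
        return np.mean((s - y) ** 2) + lam * parity_gap(s, groups)

    res = minimize(objective, x0=np.array([1.0, 0.0, 1.0, 0.0]),
                   method="Nelder-Mead")
    return res.x


# Usage: scores from any proprietary or black-box model work unchanged,
# and the adjustment can be tuned on a different dataset than training.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=500)
y = rng.normal(0.5 + 0.2 * groups, 0.1)          # synthetic biased labels
raw_scores = y + rng.normal(0, 0.05, size=500)   # stand-in black-box output
theta = fit_adjustment(raw_scores, y, groups, lam=5.0)
```

Because the adjustment parameters are few and act directly on the scores, they are also easy to inspect, which is one plausible reading of the interpretability claim above; the penalty weight `lam` traces out the fairness/accuracy tradeoff.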
Similar Papers
Revisiting Pre-processing Group Fairness: A Modular Benchmarking Framework
Machine Learning (CS)
Makes computer decisions fairer by fixing bad data.
Explainable post-training bias mitigation with distribution-based fairness metrics
Machine Learning (CS)
Makes AI fair without retraining.
LoGoFair: Post-Processing for Local and Global Fairness in Federated Learning
Machine Learning (CS)
Makes AI fair for everyone, everywhere.