Cost Efficient Fairness Audit Under Partial Feedback
By: Nirjhar Das, Mohit Sharma, Praharsh Nanavati, and others
Potential Business Impact:
Detect unfair classification decisions (e.g., in lending) while minimizing the cost of acquiring the labels needed for the audit.
We study the problem of auditing the fairness of a given classifier under partial feedback, where true labels are available only for positively classified individuals (e.g., loan repayment outcomes are observed only for approved applicants). We introduce a novel cost model for acquiring additional labeled data, designed to more accurately reflect real-world costs such as credit assessment, loan processing, and potential defaults. Our goal is to find optimal fairness audit algorithms that are more cost-effective than random exploration and natural baselines. In our work, we consider two audit settings: a black-box model with no assumptions on the data distribution, and a mixture model, where features and true labels follow a mixture of exponential family distributions. In the black-box setting, we propose a near-optimal auditing algorithm under mild assumptions and show that a natural baseline can be strictly suboptimal. In the mixture model setting, we design a novel algorithm that achieves significantly lower audit cost than in the black-box case. Our approach leverages prior work on learning from truncated samples and maximum-a-posteriori oracles, and extends known results on spherical Gaussian mixtures to handle exponential family mixtures, which may be of independent interest. Moreover, our algorithms apply to popular fairness metrics including demographic parity, equal opportunity, and equalized odds. Empirically, we demonstrate strong performance of our algorithms on real-world fair classification datasets like Adult Income and Law School, consistently outperforming natural baselines by around 50% in terms of audit cost.
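The partial-feedback setting in the abstract can be made concrete with a small sketch. The code below is illustrative only, not the paper's algorithm: it uses synthetic data, and all names (`demographic_parity_gap`, `equal_opportunity_gap`, the `budget` variable) are invented for this example. It shows why demographic parity is auditable from decisions alone, while equal opportunity requires purchasing labels for rejected individuals, since naively using only the observed labels gives a trivially biased true-positive-rate estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical audit data: we only need decisions, group membership,
# and (partially observed) true labels.
n = 1000
group = rng.integers(0, 2, size=n)    # protected attribute (0/1)
y_hat = rng.integers(0, 2, size=n)    # classifier decisions
y_true = rng.integers(0, 2, size=n)   # ground truth (hidden for rejects)

# Partial feedback: true labels are observed only where y_hat == 1.
observed = y_hat == 1

def demographic_parity_gap(y_hat, group):
    """DP needs no true labels: compare positive-decision rates across groups."""
    rates = [y_hat[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_hat, y_true, group, labeled):
    """EO compares true-positive rates, so it can only use individuals
    whose true label is known (`labeled` marks those)."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & labeled & (y_true == 1)
        tprs.append(y_hat[mask].mean())
    return abs(tprs[0] - tprs[1])

print(demographic_parity_gap(y_hat, group))

# With observed labels only, the EO estimate is degenerate: every labeled
# individual was approved, so both groups' naive TPRs equal 1 and the gap is 0.
print(equal_opportunity_gap(y_hat, y_true, group, observed))  # prints 0.0

# Simulated audit step: buy labels for a random subset of rejects
# (random exploration -- the baseline the paper improves upon).
budget = 200
reject_idx = np.flatnonzero(~observed)
bought = rng.choice(reject_idx, size=budget, replace=False)
labeled = observed.copy()
labeled[bought] = True
print(equal_opportunity_gap(y_hat, y_true, group, labeled))
```

The paper's contribution is choosing *which* labels to buy (and how many) more cost-effectively than the random exploration sketched in the last step, under its explicit cost model for assessment, processing, and defaults.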
Similar Papers
- Reliable fairness auditing with semi-supervised inference (Methodology): audits unfairness in healthcare AI systems.
- Fairness is in the details: Face Dataset Auditing (Applications): audits unfairness in face-image datasets used to train AI.
- Beyond Internal Data: Bounding and Estimating Fairness from Incomplete Data (Machine Learning (CS)): estimates AI fairness using external data sources.