Fairness-aware Anomaly Detection via Fair Projection
By: Feng Xiao, Xiaoying Tang, Jicong Fan
Potential Business Impact:
Makes AI fairer by spotting bad data for everyone.
Unsupervised anomaly detection is a critical task in many high-social-impact applications such as finance, healthcare, social media, and cybersecurity, where demographic attributes such as age, gender, race, and disease status are used frequently. In these scenarios, bias in anomaly detection systems can lead to unfair treatment of different groups and even exacerbate social bias. In this work, we first thoroughly analyze the feasibility of, and the necessary assumptions for, ensuring group fairness in unsupervised anomaly detection. Second, we propose a novel fairness-aware anomaly detection method, FairAD. From the normal training data, FairAD learns a projection that maps data from different demographic groups to a common target distribution that is simple and compact, providing a reliable basis for estimating the density of the data. The density can be used directly to identify anomalies, while the common target distribution ensures fairness between groups. Furthermore, we propose a threshold-free fairness metric that provides a global view of a model's fairness, eliminating dependence on manual threshold selection. Experiments on real-world benchmarks demonstrate that our method achieves an improved trade-off between detection accuracy and fairness under both balanced and skewed data distributions across groups.
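To make the idea in the abstract concrete, here is a minimal, hypothetical Python sketch, not the authors' FairAD implementation. It assumes (1) a per-group whitening map standing in for the learned projection to a common, compact target distribution (a standard Gaussian), (2) anomaly scores given by the negative log-density under that shared target, and (3) an illustrative threshold-free fairness probe (the Kolmogorov-Smirnov distance between per-group score distributions), which is a stand-in rather than the paper's metric.

```python
# Hypothetical sketch of "project every group to a common target, score by density".
# Not the FairAD algorithm; a simplified stand-in using per-group whitening.
import numpy as np

class ToyFairProjectionAD:
    def fit(self, X, groups):
        # Learn one affine map per demographic group that pushes the group's
        # normal training data toward the same target distribution N(0, I).
        self.maps = {}
        for g in np.unique(groups):
            Xg = X[groups == g]
            mu = Xg.mean(axis=0)
            cov = np.cov(Xg, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            # Inverse square root of the covariance via eigendecomposition.
            vals, vecs = np.linalg.eigh(cov)
            W = vecs @ np.diag(vals ** -0.5) @ vecs.T
            self.maps[g] = (mu, W)
        return self

    def score(self, X, groups):
        # Anomaly score = negative log-density under the common target N(0, I)
        # (up to a constant), i.e. half the squared norm of the projected point.
        scores = np.empty(len(X))
        for g, (mu, W) in self.maps.items():
            idx = groups == g
            Z = (X[idx] - mu) @ W.T
            scores[idx] = 0.5 * np.sum(Z ** 2, axis=1)
        return scores

def threshold_free_gap(scores, groups):
    # Illustrative threshold-free fairness probe: the largest Kolmogorov-Smirnov
    # distance between any two groups' score distributions (0 = identical).
    gaps = []
    gs = np.unique(groups)
    for i in range(len(gs)):
        for j in range(i + 1, len(gs)):
            a = np.sort(scores[groups == gs[i]])
            b = np.sort(scores[groups == gs[j]])
            grid = np.union1d(a, b)
            cdf_a = np.searchsorted(a, grid, side="right") / len(a)
            cdf_b = np.searchsorted(b, grid, side="right") / len(b)
            gaps.append(np.max(np.abs(cdf_a - cdf_b)))
    return max(gaps) if gaps else 0.0

# Usage on synthetic data with two groups drawn from different distributions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1.5, (200, 5))])
groups = np.array([0] * 200 + [1] * 200)
model = ToyFairProjectionAD().fit(X, groups)
s = model.score(X, groups)
print("threshold-free group gap:", threshold_free_gap(s, groups))
```

Because both groups are mapped to the same target distribution before scoring, their score distributions line up and the gap stays small; with a single shared map, group 1's shifted data would dominate the high-score region and the gap would grow.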
Similar Papers
Quantifying Query Fairness Under Unawareness
Information Retrieval
Makes search results fair for everyone.
Unsupervised Surrogate Anomaly Detection
Machine Learning (CS)
Finds weird things in data.
Reliable fairness auditing with semi-supervised inference
Methodology
Find unfairness in computer health helpers.