Incorporating Fairness Constraints into Archetypal Analysis
By: Aleix Alcacer, Irene Epifanio
Potential Business Impact:
Makes computer models fairer by keeping sensitive group information out of what they learn.
Archetypal Analysis (AA) is an unsupervised learning method that represents data as convex combinations of extreme patterns called archetypes. While AA provides interpretable and low-dimensional representations, it can inadvertently encode sensitive attributes, leading to fairness concerns. In this work, we propose Fair Archetypal Analysis (FairAA), a modified formulation that explicitly reduces the influence of sensitive group information in the learned projections. We also introduce FairKernelAA, a nonlinear extension that addresses fairness in more complex data distributions. Our approach incorporates a fairness regularization term while preserving the structure and interpretability of the archetypes. We evaluate FairAA and FairKernelAA on synthetic datasets, including linear, nonlinear, and multi-group scenarios, demonstrating their ability to reduce group separability, as measured by maximum mean discrepancy and linear separability, without substantially compromising explained variance. We further validate our methods on the real-world ANSUR I dataset, confirming their robustness and practical utility. The results show that FairAA achieves a favorable trade-off between utility and fairness, making it a promising tool for responsible representation learning in sensitive applications.
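To make the idea concrete, the sketch below combines the standard archetypal analysis objective (reconstructing each point as a convex mixture of archetypes, which are themselves convex mixtures of data points) with a simple fairness-style penalty that discourages correlation between the mixture coefficients and a sensitive group indicator. This is a minimal illustration under assumptions of our own: the names fair_aa and project_simplex, the penalty form (a squared cross-covariance term weighted by lam), and the projected-gradient optimization are not taken from the paper and may differ from the actual FairAA formulation.

```python
# Minimal sketch of archetypal analysis with a fairness-style penalty.
# NOTE: illustration only. The penalty (squared cross-covariance between the
# mixture coefficients A and a centered group indicator) and the projected
# gradient scheme are assumptions; the paper's FairAA regularizer may differ.
import numpy as np

def project_simplex(V):
    """Project each row of V onto the probability simplex."""
    U = np.sort(V, axis=1)[:, ::-1]
    css = np.cumsum(U, axis=1) - 1.0
    idx = np.arange(1, V.shape[1] + 1)
    rho = (U - css / idx > 0).sum(axis=1)
    theta = css[np.arange(V.shape[0]), rho - 1] / rho
    return np.maximum(V - theta[:, None], 0.0)

def fair_aa(X, s, k=3, lam=1.0, iters=500, seed=0):
    """X: (n, d) data, s: (n,) binary group labels, k: number of archetypes,
    lam: weight of the fairness penalty. Returns coefficients A and archetypes Z."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = rng.dirichlet(np.ones(k), size=n)   # each point as a mixture of archetypes
    B = rng.dirichlet(np.ones(n), size=k)   # each archetype as a mixture of points
    Sc = (s - s.mean()).reshape(-1, 1)      # centered group indicator
    sx = np.linalg.norm(X, 2) ** 2          # ||X||_2^2, reused in the step size for B
    for _ in range(iters):
        Z = B @ X                           # current archetypes, shape (k, d)
        # Gradient of 0.5*||A Z - X||^2 + 0.5*lam*||Sc^T A||^2 w.r.t. A,
        # with step size 1/LA from a Lipschitz bound on that gradient.
        gA = (A @ Z - X) @ Z.T + lam * (Sc @ (Sc.T @ A))
        LA = np.linalg.norm(Z, 2) ** 2 + lam * (Sc ** 2).sum()
        A = project_simplex(A - gA / LA)
        # Gradient of 0.5*||A B X - X||^2 w.r.t. B (reconstruction term only).
        gB = A.T @ (A @ (B @ X) - X) @ X.T
        LB = np.linalg.norm(A, 2) ** 2 * sx
        B = project_simplex(B - gB / LB)
    return A, B @ X

# Usage on toy two-group data: a larger lam should shrink the gap between the
# groups' average coefficient profiles (a crude proxy for group separability).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
s = np.array([0] * 100 + [1] * 100)
A, Z = fair_aa(X, s, k=3, lam=10.0)
print("reconstruction error:", np.linalg.norm(A @ Z - X))
print("group gap in coefficients:", np.abs(A[s == 0].mean(0) - A[s == 1].mean(0)).max())
```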
Similar Papers
A Survey on Archetypal Analysis
Methodology
Finds simple patterns in complicated information.
Archetypal Analysis for Binary Data
Machine Learning (CS)
Finds hidden patterns in yes/no data.
Fairness-aware Anomaly Detection via Fair Projection
Machine Learning (CS)
Makes anomaly detection fairer across different groups of people.