Incorporating Fairness Constraints into Archetypal Analysis

Published: July 16, 2025 | arXiv ID: 2507.12021v1

By: Aleix Alcacer, Irene Epifanio

Potential Business Impact:

Makes machine-learning representations fairer by limiting how much they reveal about sensitive attributes such as group membership.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Archetypal Analysis (AA) is an unsupervised learning method that represents data as convex combinations of extreme patterns called archetypes. While AA provides interpretable and low-dimensional representations, it can inadvertently encode sensitive attributes, leading to fairness concerns. In this work, we propose Fair Archetypal Analysis (FairAA), a modified formulation that explicitly reduces the influence of sensitive group information in the learned projections. We also introduce FairKernelAA, a nonlinear extension that addresses fairness in more complex data distributions. Our approach incorporates a fairness regularization term while preserving the structure and interpretability of the archetypes. We evaluate FairAA and FairKernelAA on synthetic datasets, including linear, nonlinear, and multi-group scenarios, demonstrating their ability to reduce group separability (as measured by maximum mean discrepancy and linear separability) without substantially compromising explained variance. We further validate our methods on the real-world ANSUR I dataset, confirming their robustness and practical utility. The results show that FairAA achieves a favorable trade-off between utility and fairness, making it a promising tool for responsible representation learning in sensitive applications.
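To make the idea concrete, here is a minimal toy sketch of archetypal analysis with a fairness penalty, under stated assumptions: standard AA minimizes ||X - A B X||^2 with the rows of the coefficient matrices A and B constrained to the probability simplex, and a fairness term is added that penalizes correlation between the projections (rows of A) and a centered sensitive-group indicator. The penalty form ||A^T s_c||^2, the learning rate, and the alternating projected-gradient scheme are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def project_simplex(V):
    """Euclidean projection of each row of V onto the probability simplex
    (sorting-based algorithm)."""
    n, d = V.shape
    U = np.sort(V, axis=1)[:, ::-1]          # rows sorted in descending order
    css = np.cumsum(U, axis=1) - 1.0
    idx = np.arange(1, d + 1)
    rho = (U - css / idx > 0).sum(axis=1)    # number of active components
    theta = css[np.arange(n), rho - 1] / rho
    return np.maximum(V - theta[:, None], 0.0)

def fair_aa(X, s, k=3, lam=1.0, n_iter=500, lr=1e-3, seed=0):
    """Toy fairness-regularized AA: minimize ||X - A B X||^2 + lam*||A^T s_c||^2,
    where s_c is the centered sensitive-group indicator and the rows of
    A (n x k) and B (k x n) lie on the simplex. Hypothetical sketch, not the
    paper's algorithm."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    s_c = (np.asarray(s) - np.mean(s)).astype(float)
    A = project_simplex(rng.random((n, k)))
    B = project_simplex(rng.random((k, n)))
    for _ in range(n_iter):
        Z = B @ X                                         # archetypes (k x d)
        R = A @ Z - X                                     # reconstruction residual
        gA = 2 * R @ Z.T + 2 * lam * np.outer(s_c, s_c @ A)
        A = project_simplex(A - lr * gA)                  # projected gradient step
        R = A @ (B @ X) - X
        gB = 2 * A.T @ R @ X.T
        B = project_simplex(B - lr * gB)
    return A, B
```

With lam = 0 this reduces to plain AA; increasing lam trades reconstruction quality (explained variance) against how much the projections separate the sensitive groups, mirroring the utility-fairness trade-off the abstract describes.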

Country of Origin
🇪🇸 Spain

Page Count
9 pages

Category
Statistics: Machine Learning (Stat)