Interpretable Fair Clustering
By: Mudi Jiang, Jiahui Zhou, Xinying Liu, and more
Potential Business Impact:
Makes computer-generated groupings fair and easy to understand.
Fair clustering has gained increasing attention in recent years, especially in applications involving socially sensitive attributes. However, existing fair clustering methods often lack interpretability, limiting their applicability in high-stakes scenarios where understanding the rationale behind clustering decisions is essential. In this work, we address this limitation by proposing an interpretable and fair clustering framework, which integrates fairness constraints into the structure of decision trees. Our approach constructs interpretable decision trees that partition the data while ensuring fair treatment across protected groups. To further enhance the practicality of our framework, we also introduce a variant that requires no fairness hyperparameter tuning, achieved through post-pruning a tree constructed without fairness constraints. Extensive experiments on both real-world and synthetic datasets demonstrate that our method not only delivers competitive clustering performance and improved fairness, but also offers additional advantages such as interpretability and the ability to handle multiple sensitive attributes. These strengths enable our method to perform robustly under complex fairness constraints, opening new possibilities for equitable and transparent clustering.
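The abstract describes the approach only at a high level, so the following is a minimal sketch rather than the authors' method: it grows a shallow axis-aligned tree greedily, scoring each candidate split by within-node sum of squared errors plus a weighted "balance" penalty (the gap between a node's protected-group mix and the dataset-wide mix). The penalty weight `lam`, the SSE objective, and this particular balance notion are all assumptions for illustration; the paper's hyperparameter-free variant would instead post-prune a tree grown without the penalty.

```python
import numpy as np

def sse(X):
    """Within-node sum of squared distances to the node's centroid."""
    return float(((X - X.mean(axis=0)) ** 2).sum()) if len(X) else 0.0

def imbalance(s, global_props):
    """L1 gap between a node's protected-group proportions and the
    dataset-wide proportions (one common fairness notion; an assumption here)."""
    props = np.bincount(s, minlength=len(global_props)) / max(len(s), 1)
    return float(np.abs(props - global_props).sum())

def best_split(X, s, global_props, lam):
    """Greedy axis-aligned split minimizing SSE plus the fairness penalty."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            cost = (sse(X[left]) + sse(X[~left])
                    + lam * (imbalance(s[left], global_props)
                             + imbalance(s[~left], global_props)))
            if best is None or cost < best[0]:
                best = (cost, j, t)
    return best

def fair_tree(X, s, lam=1.0, max_leaves=4):
    """Grow a shallow tree whose leaves are the clusters. Each leaf keeps
    only its most recent split rule; a full implementation would store the
    whole root-to-leaf path for interpretability."""
    global_props = np.bincount(s) / len(s)
    leaves = [(np.arange(len(X)), None)]  # (point indices, split rule)
    while len(leaves) < max_leaves:
        gains = []
        for i, (idx, _) in enumerate(leaves):
            if len(idx) < 2:
                continue
            cur = sse(X[idx]) + lam * imbalance(s[idx], global_props)
            cand = best_split(X[idx], s[idx], global_props, lam)
            if cand is not None:
                gains.append((cur - cand[0], i, cand))
        if not gains:
            break
        _, i, (cost, j, t) = max(gains)  # split the leaf with the best gain
        idx, _ = leaves.pop(i)
        leaves.append((idx[X[idx, j] <= t], f"x[{j}] <= {t:.3g}"))
        leaves.append((idx[X[idx, j] > t], f"x[{j}] > {t:.3g}"))
    return leaves

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    s = rng.integers(0, 2, size=200)  # binary sensitive attribute
    for idx, rule in fair_tree(X, s, lam=0.5):
        print(rule, "->", len(idx), "points")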
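Note that `lam` trades clustering quality against group balance; the paper's pruning-based variant removes exactly this tuning burden by growing the tree with `lam = 0` and then merging or cutting leaves afterward to restore balance.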
Similar Papers
Fairness-Aware and Interpretable Policy Learning
Econometrics
Makes computer decisions fair and understandable.
Adversarial Fair Multi-View Clustering
Machine Learning (CS)
Makes computer-generated groups fair, not biased.
Argumentative Debates for Transparent Bias Detection [Technical Report]
Artificial Intelligence
Finds unfairness in AI by explaining its reasoning.