Interpretable Fair Clustering

Published: November 26, 2025 | arXiv ID: 2511.21109v1

By: Mudi Jiang, Jiahui Zhou, Xinying Liu, and others

Potential Business Impact:

Enables algorithms to group data fairly across protected groups while keeping the grouping decisions easy to understand.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Fair clustering has gained increasing attention in recent years, especially in applications involving socially sensitive attributes. However, existing fair clustering methods often lack interpretability, limiting their applicability in high-stakes scenarios where understanding the rationale behind clustering decisions is essential. In this work, we address this limitation by proposing an interpretable and fair clustering framework, which integrates fairness constraints into the structure of decision trees. Our approach constructs interpretable decision trees that partition the data while ensuring fair treatment across protected groups. To further enhance the practicality of our framework, we also introduce a variant that requires no fairness hyperparameter tuning, achieved through post-pruning a tree constructed without fairness constraints. Extensive experiments on both real-world and synthetic datasets demonstrate that our method not only delivers competitive clustering performance and improved fairness, but also offers additional advantages such as interpretability and the ability to handle multiple sensitive attributes. These strengths enable our method to perform robustly under complex fairness constraints, opening new possibilities for equitable and transparent clustering.
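To make the idea concrete, the sketch below shows one way a fairness-aware tree split could work: a candidate threshold is scored by its within-cluster squared error plus a penalty on how far each side's protected-group proportion drifts from the overall proportion. This is an illustrative simplification under assumed names (`fair_split`, the penalty form, and the weight `lam` are not taken from the paper), not the authors' actual objective.

```python
import numpy as np

def fair_split(x, groups, lam=1.0):
    """Choose a 1-D split threshold minimizing within-cluster SSE plus a
    fairness penalty. Hypothetical objective: the penalty is the absolute
    deviation of each side's protected-group fraction from the overall
    fraction, scaled by lam * n; the paper's exact criterion may differ."""
    order = np.argsort(x)
    xs, gs = x[order], groups[order].astype(float)
    overall = gs.mean()  # protected-group fraction in the full node
    best_threshold, best_cost = None, np.inf
    for i in range(1, len(xs)):
        left, right = xs[:i], xs[i:]
        # clustering quality: sum of squared errors around each side's mean
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        # fairness: how unbalanced each side is relative to the overall mix
        balance = abs(gs[:i].mean() - overall) + abs(gs[i:].mean() - overall)
        cost = sse + lam * len(xs) * balance
        if cost < best_cost:
            best_threshold, best_cost = (xs[i - 1] + xs[i]) / 2, cost
    return best_threshold
```

With `lam=0` the criterion reduces to an ordinary variance-minimizing split; increasing `lam` trades cluster compactness for demographic balance, which mirrors the paper's described trade-off. The hyperparameter-free variant the authors mention would instead build the tree without the penalty and enforce fairness by post-pruning.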

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)