MMM-fair: An Interactive Toolkit for Exploring and Operationalizing Multi-Fairness Trade-offs
By: Swati Swati, Arjun Roy, Emmanouil Panagiotou, and more
Potential Business Impact:
Makes AI fairer by uncovering hidden intersectional biases.
Fairness-aware classification requires balancing performance and fairness, a task often intensified by intersectional biases. Conflicting fairness definitions further complicate matters, making it difficult to identify universally fair solutions. Despite growing regulatory and societal demands for equitable AI, popular toolkits offer limited support for exploring multi-dimensional fairness and the related trade-offs. To address this, we present mmm-fair, an open-source toolkit leveraging boosting-based ensemble approaches that dynamically optimize model weights to jointly minimize classification errors and diverse fairness violations, enabling flexible multi-objective optimization. The system empowers users to deploy models that align with their context-specific needs while reliably uncovering intersectional biases often missed by state-of-the-art methods. In a nutshell, mmm-fair uniquely combines in-depth multi-attribute fairness, multi-objective optimization, a no-code chat-based interface, LLM-powered explanations, interactive Pareto exploration for model selection, custom fairness-constraint definition, and deployment-ready models in a single open-source toolkit; this combination is rarely found in existing fairness tools. Demo walkthrough available at: https://youtu.be/_rcpjlXFqkw.
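To make the core idea concrete, here is a minimal, self-contained sketch of fairness-aware boosting in the spirit the abstract describes: an AdaBoost-style loop whose sample reweighting penalizes not only misclassification but also group-wise error gaps. This is an illustrative toy, not the mmm-fair implementation or API; the data, the decision stump, the `lam` trade-off parameter, and the group-gap bonus are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature, a binary label, and a binary protected group.
n = 400
group = rng.integers(0, 2, n)
x = rng.normal(loc=group * 0.5, scale=1.0, size=n)
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

def stump_fit(x, y, w):
    """Pick the threshold/sign minimizing weighted error for a 1-D stump."""
    best = (np.inf, 0.0, 1)
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        for sign in (1, -1):
            pred = (sign * (x - t) > 0).astype(int)
            err = np.sum(w * (pred != y))
            if err < best[0]:
                best = (err, t, sign)
    return best[1], best[2]

def stump_predict(x, t, sign):
    return (sign * (x - t) > 0).astype(int)

def fair_boost(x, y, group, rounds=20, lam=0.5):
    """AdaBoost-style loop whose reweighting also upweights samples in
    the group with above-average error (a crude equalized-odds proxy)."""
    w = np.full(len(y), 1.0 / len(y))
    models, alphas = [], []
    for _ in range(rounds):
        t, sign = stump_fit(x, y, w)
        pred = stump_predict(x, t, sign)
        miss = (pred != y)
        err = max(np.sum(w * miss), 1e-12)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # Per-group error rates; lam trades accuracy against the gap.
        gaps = np.array([miss[group == g].mean() for g in (0, 1)])
        fair_bonus = lam * (gaps[group] - gaps.mean())
        w *= np.exp(alpha * miss + fair_bonus)
        w /= w.sum()
        models.append((t, sign))
        alphas.append(alpha)
    return models, np.array(alphas)

def predict(models, alphas, x):
    votes = sum(a * (2 * stump_predict(x, t, s) - 1)
                for (t, s), a in zip(models, alphas))
    return (votes > 0).astype(int)

models, alphas = fair_boost(x, y, group)
pred = predict(models, alphas, x)
acc = (pred == y).mean()
gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
print(f"accuracy={acc:.2f}  demographic-parity gap={gap:.2f}")
```

The toolkit itself goes further: it handles multiple protected attributes and their intersections, multiple (possibly conflicting) fairness definitions, and exposes the resulting accuracy/fairness Pareto front for interactive model selection rather than fixing a single trade-off weight up front.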
Similar Papers
FairMT: Fairness for Heterogeneous Multi-Task Learning
Machine Learning (CS)
Makes AI fair for different jobs and missing info.
MultiFair: Multimodal Balanced Fairness-Aware Medical Classification with Dual-Level Gradient Modulation
Machine Learning (CS)
Makes medical AI fairer and more accurate.
mFARM: Towards Multi-Faceted Fairness Assessment based on HARMs in Clinical Decision Support
Artificial Intelligence
Helps doctors give fair and accurate patient care.