Balancing Fairness and Performance in Healthcare AI: A Gradient Reconciliation Approach
By: Xiaoyang Wang, Christopher C. Yang
Potential Business Impact:
Makes medical AI fair for everyone.
The rapid growth of healthcare data and advances in computational power have accelerated the adoption of artificial intelligence (AI) in medicine. However, AI systems deployed without explicit fairness considerations risk exacerbating existing healthcare disparities, potentially leading to inequitable resource allocation and diagnostic disparities across demographic subgroups. To address this challenge, we propose FairGrad, a novel gradient reconciliation framework that automatically balances predictive performance and multi-attribute fairness optimization in healthcare AI models. Our method resolves conflicting optimization objectives by projecting each gradient vector onto the orthogonal plane of the others, thereby regularizing the optimization trajectory to ensure equitable consideration of all objectives. Evaluated on diverse real-world healthcare datasets and predictive tasks, including Substance Use Disorder (SUD) treatment and sepsis mortality, FairGrad achieved statistically significant improvements in multi-attribute fairness metrics (e.g., equalized odds) while maintaining competitive predictive accuracy. These results demonstrate the viability of harmonizing fairness and utility in mission-critical medical AI applications.
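The abstract does not spell out FairGrad's exact update rule, but the core idea it describes, projecting each objective's gradient onto the orthogonal plane of conflicting gradients, can be sketched in the style of PCGrad-like methods. The function below is a minimal illustration under that assumption (the `reconcile` name and the two-objective example are hypothetical, not from the paper):

```python
import numpy as np

def reconcile(gradients):
    """PCGrad-style gradient reconciliation sketch.

    For each objective's gradient, remove the component that conflicts
    (negative dot product) with every other objective's gradient by
    projecting it onto that gradient's orthogonal plane, then sum the
    projected gradients into a single update direction. FairGrad's
    exact rule may differ; this only illustrates the projection idea.
    """
    projected = []
    for i, g in enumerate(gradients):
        g = g.astype(float).copy()
        for j, h in enumerate(gradients):
            if i == j:
                continue
            dot = g @ h
            if dot < 0:  # objectives conflict: subtract the projection onto h
                g = g - (dot / (h @ h)) * h
        projected.append(g)
    return np.sum(projected, axis=0)

# Hypothetical example: a utility gradient and a fairness gradient
# that partially conflict (negative inner product).
g_utility = np.array([1.0, 0.0])
g_fairness = np.array([-1.0, 1.0])
update = reconcile([g_utility, g_fairness])  # -> array([0.5, 1.5])
```

After projection, each objective's gradient no longer has a component opposing the other, so the summed update advances both objectives rather than letting one dominate.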
Similar Papers
Evaluating and Mitigating Bias in AI-Based Medical Text Generation
Computation and Language
Makes AI medical reports fair for everyone.
On the Interplay of Human-AI Alignment, Fairness, and Performance Trade-offs in Medical Imaging
CV and Pattern Recognition
Helps AI see patients fairly, not just some.
One Size Fits None: Rethinking Fairness in Medical AI
Machine Learning (CS)
Checks if AI doctors treat everyone fairly.