MultiFair: Multimodal Balanced Fairness-Aware Medical Classification with Dual-Level Gradient Modulation
By: Md Zubair, Hao Zheng, Jonathan Nussdorf, and more
Potential Business Impact:
Makes medical AI fairer and more accurate.
Medical decision systems increasingly rely on data from multiple sources to ensure reliable and unbiased diagnosis. However, existing multimodal learning models fall short of this goal because they often ignore two critical challenges. First, different data modalities may learn at uneven rates, so the model converges biased towards certain modalities. Second, the model may emphasize learning on certain demographic groups, causing unfair performance across groups. The two aspects can influence each other, as different data modalities may favor different groups during optimization, leading to multimodal learning that is both imbalanced and unfair. This paper proposes a novel approach called MultiFair for multimodal medical classification, which addresses these challenges with a dual-level gradient modulation process. MultiFair dynamically modulates the direction and magnitude of training gradients at both the data-modality and demographic-group levels. We conduct extensive experiments on two multimodal medical datasets with different demographic groups. The results show that MultiFair outperforms state-of-the-art multimodal learning and fairness-aware learning methods.
Similar Papers
Fairness in Multi-modal Medical Diagnosis with Demonstration Selection
CV and Pattern Recognition
Makes AI see medical images fairly for everyone.
Revisit Modality Imbalance at the Decision Layer
Machine Learning (CS)
Fixes AI that favors one sense over another.