Evaluating and Mitigating Bias in AI-Based Medical Text Generation
By: Xiuying Chen, Tairan Wang, Juexiao Zhou, and others
Potential Business Impact:
Makes AI medical reports fair for everyone.
Artificial intelligence (AI) systems, particularly those based on deep learning models, have increasingly achieved expert-level performance in medical applications. However, there is growing concern that such systems may reflect and amplify human bias, reducing the quality of their performance for historically under-served populations. Fairness has attracted considerable research interest in medical imaging classification, yet it remains understudied in text generation. In this study, we investigate the fairness problem in medical text generation and observe significant performance discrepancies across races, sexes, and age groups, including intersectional groups, across various model scales and evaluation metrics. To mitigate this issue, we propose an algorithm that selectively optimizes underperforming groups to reduce bias. The selection rules account not only for word-level accuracy but also for pathology accuracy relative to the target reference, while keeping the entire process fully differentiable for effective model training. Our evaluations across multiple backbones, datasets, and modalities demonstrate that the proposed algorithm improves fairness in text generation without compromising overall performance: disparities among groups across different metrics shrink by more than 30%, while the relative change in text generation accuracy is typically within 2%. By reducing the bias produced by deep learning models, our approach can help alleviate concerns about the fairness and reliability of AI-generated diagnostic text in the medical domain. Our code is publicly available to facilitate further research at https://github.com/iriscxy/GenFair.
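The core idea of the abstract, selectively upweighting groups whose generation quality lags while keeping the objective smooth and differentiable, can be sketched as follows. This is an illustrative reweighting scheme under our own assumptions (a softmax over per-group mean losses), not the authors' actual selection rule, which also incorporates pathology-level accuracy; see the linked repository for the real implementation.

```python
import numpy as np

def group_weighted_loss(losses, groups, temperature=1.0):
    """Smoothly upweight underperforming demographic groups.

    losses: per-sample loss values (higher = worse generation quality)
    groups: per-sample group ids (e.g., race/sex/age buckets)

    Returns a scalar objective in which groups with a higher mean
    loss receive larger softmax weights -- a smooth, differentiable
    alternative to hard selection of the worst group.
    """
    losses = np.asarray(losses, dtype=float)
    groups = np.asarray(groups)
    ids = np.unique(groups)
    # Mean loss per group.
    group_means = np.array([losses[groups == g].mean() for g in ids])
    # Softmax over group mean losses: worse-off groups get more weight.
    w = np.exp(group_means / temperature)
    w = w / w.sum()
    return float((w * group_means).sum())
```

With equal per-group losses this reduces to the plain mean; when one group does worse, its higher weight pulls the objective toward that group, so gradient descent spends more effort on it. The `temperature` parameter (our own knob) interpolates between uniform averaging (large values) and worst-group optimization (small values).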
Similar Papers
On the Interplay of Human-AI Alignment, Fairness, and Performance Trade-offs in Medical Imaging
CV and Pattern Recognition
Helps AI see patients fairly, not just some.
One Size Fits None: Rethinking Fairness in Medical AI
Machine Learning (CS)
Checks if AI doctors treat everyone fairly.
Exploring Bias in over 100 Text-to-Image Generative Models
CV and Pattern Recognition
Finds how AI art tools become unfair over time.