FairReason: Balancing Reasoning and Social Bias in MLLMs
By: Zhenyu Pan, Yutong Zhang, Jianshu Zhang, and more
Potential Business Impact:
Makes AI smarter without being unfair.
Multimodal Large Language Models (MLLMs) already achieve state-of-the-art results across a wide range of tasks and modalities. To push their reasoning ability further, recent studies explore advanced prompting schemes and post-training fine-tuning. Although these techniques improve logical accuracy, they frequently leave the models' outputs burdened with pronounced social biases. Clarifying how reasoning gains interact with bias mitigation, and whether the two objectives inherently trade off, therefore remains an open and pressing research problem. Our study begins by benchmarking three bias-mitigation strategies under identical conditions: supervised fine-tuning (SFT), knowledge distillation (KD), and rule-based reinforcement learning (RL), establishing their baseline strengths and weaknesses. Building on these results, we vary the proportion of debias-focused and reasoning-centric samples within each paradigm to chart the reasoning-versus-bias trade-off. Our sweeps reveal a consistent sweet spot: a roughly 1:4 mix of debias-focused to reasoning-centric samples trained with reinforcement learning cuts stereotype scores by 10% while retaining 88% of the model's original reasoning accuracy, offering concrete guidance for balancing fairness and capability in MLLMs.
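The abstract does not spell out how the 1:4 data mix is assembled, so the following is only a minimal, hypothetical sketch of one way to build such a training pool before handing it to an RL fine-tuning loop. The function name, dataset structure, and sampling scheme are assumptions for illustration, not the paper's actual pipeline.

```python
import random

def build_mixed_pool(debias_samples, reasoning_samples, debias_fraction=0.2, seed=0):
    """Hypothetical sketch: mix debias-focused and reasoning-centric samples
    at roughly a 1:4 ratio (the sweet spot reported in the abstract).

    `debias_samples` and `reasoning_samples` are assumed to be lists of
    prompt dicts; the real data format in the paper is not specified.
    """
    rng = random.Random(seed)
    # Cap the total size so both pools can supply their share without repetition.
    max_total = int(min(len(debias_samples) / debias_fraction,
                        len(reasoning_samples) / (1.0 - debias_fraction)))
    n_debias = int(max_total * debias_fraction)   # ~1 part debias-focused
    n_reason = max_total - n_debias               # ~4 parts reasoning-centric
    pool = (rng.sample(debias_samples, n_debias) +
            rng.sample(reasoning_samples, n_reason))
    rng.shuffle(pool)
    return pool

# Illustrative usage with toy data; the mixed pool would then feed whatever
# rule-based RL trainer is in use.
debias = [{"prompt": f"debias-{i}"} for i in range(200)]
reason = [{"prompt": f"reason-{i}"} for i in range(800)]
mixed = build_mixed_pool(debias, reason)
print(len(mixed), sum("debias" in x["prompt"] for x in mixed) / len(mixed))
```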
Similar Papers
Reasoning Towards Fairness: Mitigating Bias in Language Models through Reasoning-Guided Fine-Tuning
Computation and Language
Makes AI less biased by teaching it to think.
Reassessing the Role of Supervised Fine-Tuning: An Empirical Study in VLM Reasoning
Machine Learning (CS)
Makes AI better at thinking, even small ones.