Investigating Thinking Behaviours of Reasoning-Based Language Models for Social Bias Mitigation
By: Guoqing Luo, Iffat Maab, Lili Mou, and more
Potential Business Impact:
Fixes AI's thinking to stop unfair stereotypes.
While reasoning-based large language models excel at complex tasks through an internal, structured thinking process, a concerning phenomenon has emerged: the thinking process itself can aggregate social stereotypes, leading to biased outcomes. However, the underlying behaviours of these models in social bias scenarios remain underexplored. In this work, we systematically investigate the mechanisms within the thinking process that drive this phenomenon and uncover two failure patterns behind social bias aggregation: 1) stereotype repetition, where the model relies on social stereotypes as its primary justification, and 2) irrelevant information injection, where it fabricates or introduces new details to support a biased narrative. Building on these insights, we introduce a lightweight prompt-based mitigation approach that asks the model to review its own initial reasoning against these specific failure patterns. Experiments on question-answering (BBQ and StereoSet) and open-ended generation (BOLD) benchmarks show that our approach effectively reduces bias while maintaining or improving accuracy.
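As a rough illustration of the kind of prompt-based self-review the abstract describes, the sketch below runs a two-pass loop: first elicit the model's reasoning, then ask it to audit that reasoning for the two named failure patterns (stereotype repetition and irrelevant information injection) before giving a final answer. The `query_model` placeholder, the prompt wording, and the two-pass structure are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch of a prompt-based self-review mitigation step.
# `query_model` stands in for any chat/completion API call; the prompt wording
# and the two-pass structure are assumptions, not the paper's exact procedure.

REVIEW_INSTRUCTIONS = (
    "Review your reasoning above for two failure patterns:\n"
    "1) Stereotype repetition: relying on a social stereotype as the main justification.\n"
    "2) Irrelevant information injection: introducing details not present in the question.\n"
    "If either pattern is present, revise the reasoning to rely only on the given context, "
    "then state your final answer."
)

def query_model(prompt: str) -> str:
    """Placeholder for a call to a reasoning-capable LLM (e.g. via an API client)."""
    raise NotImplementedError("Wire this to your model provider of choice.")

def answer_with_self_review(question: str) -> str:
    # Pass 1: obtain the model's initial reasoning and answer.
    initial = query_model(f"{question}\n\nThink step by step, then answer.")
    # Pass 2: ask the model to check that reasoning against the two failure patterns.
    reviewed = query_model(
        f"Question: {question}\n\n"
        f"Your previous reasoning and answer:\n{initial}\n\n"
        f"{REVIEW_INSTRUCTIONS}"
    )
    return reviewed
```

On multiple-choice benchmarks such as BBQ, the final answer would then be parsed from the reviewed output; the point of the sketch is only that the review pass names the two failure patterns explicitly rather than issuing a generic "avoid bias" instruction.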
Similar Papers
BiasCause: Evaluate Socially Biased Causal Reasoning of Large Language Models
Computation and Language
Finds why computers say unfair things.
Do Biased Models Have Biased Thoughts?
Computation and Language
Fixes computer "thinking" to be less unfair.
Soft Inductive Bias Approach via Explicit Reasoning Perspectives in Inappropriate Utterance Detection Using Large Language Models
Computation and Language
Catches mean online talk to make games safer.