Investigating Thinking Behaviours of Reasoning-Based Language Models for Social Bias Mitigation

Published: October 20, 2025 | arXiv ID: 2510.17062v1

By: Guoqing Luo, Iffat Maab, Lili Mou, and more

Potential Business Impact:

Gives AI models a simple self-check on their own reasoning that catches stereotyped justifications, reducing biased answers without sacrificing accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

While reasoning-based large language models excel at complex tasks through an internal, structured thinking process, a concerning phenomenon has emerged: this thinking process can itself aggregate social stereotypes, leading to biased outcomes. However, the underlying behaviours of these models in social bias scenarios remain underexplored. In this work, we systematically investigate the mechanisms within the thinking process that drive this phenomenon and uncover two failure patterns behind social bias aggregation: 1) stereotype repetition, where the model relies on social stereotypes as its primary justification, and 2) irrelevant information injection, where it fabricates or introduces new details to support a biased narrative. Building on these insights, we introduce a lightweight prompt-based mitigation approach that queries the model to review its own initial reasoning against these specific failure patterns. Experiments on question answering (BBQ and StereoSet) and open-ended generation (BOLD) benchmarks show that our approach effectively reduces bias while maintaining or improving accuracy.
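The mitigation, as described, is a two-pass prompting loop: the model reasons and answers as usual, then is asked to audit that reasoning against the two named failure patterns. Below is a minimal sketch of such a loop; the prompt wording, the `answer_with_self_review` function, and the `query_model` stub are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a two-pass, prompt-based self-review loop in the spirit of the
# paper's mitigation: generate reasoning, then ask the model to audit that
# reasoning for the two failure patterns. All prompt text here is an
# illustrative assumption, not the paper's exact prompt.

from typing import Callable

REVIEW_PROMPT = """\
Review your reasoning below for two failure patterns:
1) Stereotype repetition: using a social stereotype as the primary justification.
2) Irrelevant information injection: fabricating or adding details not present
   in the question to support a biased narrative.
If either pattern appears, revise the reasoning and state a corrected answer.
Otherwise, restate your original answer.

Question:
{question}

Initial reasoning and answer:
{initial}
"""


def answer_with_self_review(question: str,
                            query_model: Callable[[str], str]) -> str:
    """Two-pass answering: normal reasoning, then self-review of that reasoning.

    `query_model` is any function mapping a prompt string to the model's reply,
    e.g. a thin wrapper around a chat-completion API.
    """
    initial = query_model(question)  # pass 1: unconstrained reasoning + answer
    review = REVIEW_PROMPT.format(question=question, initial=initial)
    return query_model(review)       # pass 2: audit against the failure patterns


# Usage (hypothetical BBQ-style ambiguous question):
#   final = answer_with_self_review(
#       "An engineer and a nurse entered the room. Who is bad at math?",
#       query_model=my_llm_call,  # your own model wrapper
#   )
```

Keeping the review as a second call to the same model, rather than a separate classifier, is what makes the approach lightweight in the abstract's sense: no fine-tuning or external bias detector is required.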

Country of Origin
🇨🇦 Canada

Page Count
15 pages

Category
Computer Science:
Computation and Language