Guiding LLM Decision-Making with Fairness Reward Models
By: Zara Hall, Melanie Subbiah, Thomas P. Zollo, and more
Potential Business Impact:
Makes AI decisions fairer without losing accuracy.
Large language models are increasingly used to support high-stakes decisions, potentially influencing who is granted bail or receives a loan. Naive chain-of-thought sampling can improve average decision accuracy, but it has also been shown to amplify unfair bias. To address this challenge and enable the trustworthy use of reasoning models in high-stakes decision-making, we propose a framework for training a generalizable Fairness Reward Model (FRM). Our model assigns a fairness score to LLM reasoning, enabling the system to down-weight biased trajectories and favor equitable ones when aggregating decisions across reasoning chains. We show that a single Fairness Reward Model, trained on weakly supervised, LLM-annotated examples of biased versus unbiased reasoning, transfers across tasks, domains, and model families without additional fine-tuning. When applied to real-world decision-making tasks, including recidivism prediction and social media moderation, our approach consistently improves fairness while matching, or even surpassing, baseline accuracy.
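The aggregation step the abstract describes can be illustrated compactly. Below is a minimal Python sketch of fairness-weighted voting over sampled reasoning chains, under the assumption that the FRM exposes a scalar scorer; the names `fairness_weighted_vote` and `frm_score` and the stand-in scores are illustrative, not code from the paper, which would obtain each score from the trained FRM.

```python
from collections import defaultdict
from typing import Callable, List, Tuple

def fairness_weighted_vote(
    chains: List[Tuple[str, str]],
    frm_score: Callable[[str], float],
) -> str:
    """Aggregate one decision across sampled reasoning chains.

    Each chain casts a vote for its decision, weighted by the FRM's
    fairness score for its reasoning text, so biased trajectories are
    down-weighted and equitable ones dominate the final answer.
    """
    votes: dict = defaultdict(float)
    for reasoning, decision in chains:
        # Clamp at zero so chains the FRM scores as clearly biased
        # contribute nothing to the tally (an assumed design choice).
        votes[decision] += max(frm_score(reasoning), 0.0)
    return max(votes, key=votes.get)

# Toy usage with a stand-in scorer; a real FRM is a trained model.
sampled = [
    ("weighs income and repayment history ...", "approve"),
    ("references the applicant's neighborhood ...", "deny"),
    ("cites a single late payment ...", "deny"),
]
stub_scores = {r: s for (r, _), s in zip(sampled, [0.9, -0.5, 0.2])}
print(fairness_weighted_vote(sampled, stub_scores.get))
# -> "approve": the biased middle chain is effectively discarded
```

For comparison, plain self-consistency voting corresponds to `frm_score` returning 1.0 for every chain, which is how naive chain-of-thought sampling can let biased trajectories sway the aggregate decision.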
Similar Papers
Towards Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models
Computation and Language
Makes AI fairer by checking its "thinking" parts.
FairReason: Balancing Reasoning and Social Bias in MLLMs
Artificial Intelligence
Makes AI smarter without making it biased.
Improving Fairness in LLMs Through Testing-Time Adversaries
Computation and Language
Makes AI fairer by spotting and fixing bias.