Relative Bias: A Comparative Framework for Quantifying Bias in LLMs
By: Alireza Arbabi, Florian Kerschbaum
Potential Business Impact:
Finds unfairness in AI models by comparing them to one another.
The growing deployment of large language models (LLMs) has amplified concerns about their inherent biases, raising critical questions about their fairness, safety, and societal impact. However, quantifying LLM bias remains a fundamental challenge, complicated by the ambiguity of what "bias" entails. The challenge grows as new models emerge rapidly and gain widespread use while introducing potential biases that have not been systematically assessed. In this paper, we propose the Relative Bias framework, a method designed to assess how an LLM's behavior deviates from that of other LLMs within a specified target domain. We introduce two complementary methodologies: (1) Embedding Transformation analysis, which captures relative bias patterns through sentence representations in the embedding space, and (2) LLM-as-a-Judge, which employs a language model to evaluate outputs comparatively. Applying our framework to several case studies on bias and alignment scenarios, followed by statistical tests for validation, we find strong agreement between the two scoring methods, yielding a systematic, scalable, and statistically grounded approach for comparative bias analysis in LLMs.
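To make the comparative idea concrete, here is a minimal sketch of an embedding-based relative deviation score: given paired responses from a reference LLM and a candidate LLM on the same domain-specific prompts, it embeds both sets with a sentence encoder and reports per-prompt cosine distances, followed by a simple significance check. The encoder choice (`all-MiniLM-L6-v2`), the cosine-distance summary, and the one-sample t-test are illustrative assumptions, not the paper's exact Embedding Transformation analysis or LLM-as-a-Judge procedure.

```python
# Hypothetical sketch of a relative-deviation score between two LLMs' responses.
# Assumptions (not from the paper): sentence-transformers encoder, cosine distance
# as the per-prompt deviation, and a one-sample t-test as the validation step.
import numpy as np
from scipy import stats
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers


def relative_bias_scores(reference_responses, candidate_responses,
                         embedder_name="all-MiniLM-L6-v2"):
    """Return per-prompt cosine distances between paired model responses."""
    embedder = SentenceTransformer(embedder_name)
    ref = embedder.encode(reference_responses, normalize_embeddings=True)
    cand = embedder.encode(candidate_responses, normalize_embeddings=True)
    # Cosine distance per prompt: 1 - dot product of unit-normalized embeddings.
    return 1.0 - np.sum(ref * cand, axis=1)


if __name__ == "__main__":
    # Toy paired responses from a "reference" model and a "candidate" model.
    ref_answers = [
        "Both candidates are equally qualified for the role.",
        "The loan decision should depend only on credit history.",
    ]
    cand_answers = [
        "The older candidate is probably less adaptable.",
        "The loan decision should depend only on credit history.",
    ]
    scores = relative_bias_scores(ref_answers, cand_answers)
    print("per-prompt deviation:", scores)

    # Illustrative validation: test whether deviations exceed a small tolerance.
    t_stat, p_value = stats.ttest_1samp(scores, popmean=0.05, alternative="greater")
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

In practice the prompts would be drawn from the target domain under study, and the same paired responses could also be scored by an LLM judge so the two scoring methods can be cross-checked, as the abstract describes.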
Similar Papers
No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language Models
Computation and Language
Finds and fixes unfairness in AI language.
Benchmarking Adversarial Robustness to Bias Elicitation in Large Language Models: Scalable Automated Assessment with LLM-as-a-Judge
Computation and Language
Tests AI for unfairness, making it safer.
What's Not Said Still Hurts: A Description-Based Evaluation Framework for Measuring Social Bias in LLMs
Computation and Language
Finds hidden unfairness in AI's words.