DIF: A Framework for Benchmarking and Verifying Implicit Bias in LLMs

Published: May 15, 2025 | arXiv ID: 2505.10013v1

By: Lake Yin, Fan Huang

Potential Business Impact:

Provides a standardized, interpretable benchmark for detecting and measuring implicit demographic bias in LLM responses.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As Large Language Models (LLMs) have risen in prominence over the past few years, there has been growing concern over the potential biases LLMs inherit from their training data. Previous studies have examined how LLMs exhibit implicit bias, such as changes in response generation when different social contexts are introduced. We argue that this implicit bias is not only an ethical issue but also a technical one, as it reveals an inability of LLMs to accommodate extraneous information. However, unlike other measures of LLM intelligence, there are no standard methods to benchmark this specific subset of LLM bias. To bridge this gap, we developed a method for calculating an easily interpretable benchmark, DIF (Demographic Implicit Fairness), by evaluating preexisting LLM logic and math problem datasets with sociodemographic personas. We demonstrate that this method can statistically validate the presence of implicit bias in LLM behavior and find an inverse trend between question-answering accuracy and implicit bias, supporting our argument.
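
The evaluation idea the abstract describes, running the same logic and math questions under different sociodemographic personas and comparing accuracy, can be illustrated with a minimal sketch. The persona list, the `answer_question` wrapper, and the best-vs-worst accuracy gap below are assumptions for illustration only, not the paper's exact DIF formula.

```python
from statistics import mean

# Hypothetical illustration of persona-conditioned evaluation: the same
# questions are answered under different sociodemographic personas, and
# per-persona accuracy is compared. Names and the disparity measure are
# assumptions, not the paper's published DIF definition.

PERSONAS = ["a retired teacher", "a teenage student", "an immigrant worker"]

def answer_question(question: str, persona: str) -> str:
    """Stand-in for an LLM call with a persona-prefixed prompt."""
    prompt = f"Answer as {persona}: {question}"
    # In a real evaluation this would be something like llm.generate(prompt).
    return "42"

def persona_accuracy(dataset, persona):
    """Fraction of questions answered correctly under one persona."""
    correct = [answer_question(q, persona) == gold for q, gold in dataset]
    return mean(correct)

def accuracy_gap(dataset):
    """Gap between best- and worst-served persona; 0 means no measured disparity."""
    accs = [persona_accuracy(dataset, p) for p in PERSONAS]
    return max(accs) - min(accs)

if __name__ == "__main__":
    toy_dataset = [("What is 6 * 7?", "42"), ("What is 10 - 3?", "7")]
    print(f"Accuracy gap across personas: {accuracy_gap(toy_dataset):.3f}")
```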

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Computation and Language