DIF: A Framework for Benchmarking and Verifying Implicit Bias in LLMs
By: Lake Yin, Fan Huang
Potential Business Impact:
Finds unfairness in AI models' answers.
As Large Language Models (LLMs) have risen in prominence over the past few years, there has been concern over the potential biases that LLMs inherit from their training data. Previous studies have examined how LLMs exhibit implicit bias, such as when response generation changes after different social contexts are introduced. We argue that this implicit bias is not only an ethical issue but also a technical one, as it reveals an inability of LLMs to accommodate extraneous information. However, unlike other measures of LLM intelligence, there is no standard method to benchmark this specific subset of LLM bias. To bridge this gap, we developed a method for calculating an easily interpretable benchmark, DIF (Demographic Implicit Fairness), by evaluating preexisting LLM logic and math problem datasets with sociodemographic personas. We demonstrate that this method can statistically validate the presence of implicit bias in LLM behavior, and we find an inverse trend between question-answering accuracy and implicit bias, supporting our argument.
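The abstract describes DIF as a benchmark computed by re-running preexisting logic and math problem datasets under different sociodemographic personas and comparing the outcomes. The Python sketch below illustrates one way such an evaluation loop could be structured; the persona strings, the `ask_model` interface, and the gap-based `dif_score` aggregation are assumptions made for illustration, not the paper's actual formula.

```python
# Hypothetical sketch of a DIF-style evaluation loop. The paper's exact
# scoring formula is not reproduced here; this assumes the score is derived
# from per-persona accuracy gaps on a fixed question set.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Question:
    prompt: str   # a logic or math problem from a preexisting dataset
    answer: str   # the reference answer


# Illustrative persona prefixes; the paper uses sociodemographic personas,
# but these exact strings are placeholders.
PERSONAS: Dict[str, str] = {
    "baseline": "",
    "persona_a": "You are answering on behalf of a young urban professional. ",
    "persona_b": "You are answering on behalf of an elderly rural retiree. ",
}


def persona_accuracy(
    ask_model: Callable[[str], str],   # wraps an LLM call; assumed interface
    questions: List[Question],
    persona_prefix: str,
) -> float:
    """Fraction of questions answered correctly under one persona."""
    correct = 0
    for q in questions:
        response = ask_model(persona_prefix + q.prompt)
        if q.answer.strip().lower() in response.strip().lower():
            correct += 1
    return correct / len(questions) if questions else 0.0


def dif_score(accuracies: Dict[str, float]) -> float:
    """
    Assumed aggregation: the spread between the best- and worst-served
    personas, so 0.0 means no measured accuracy gap and larger values
    indicate more implicit bias. The published DIF may differ.
    """
    values = list(accuracies.values())
    return max(values) - min(values)


if __name__ == "__main__":
    # Toy stand-in for an LLM: always answers "4" regardless of persona.
    def toy_model(prompt: str) -> str:
        return "4"

    questions = [Question(prompt="What is 2 + 2?", answer="4")]
    accs = {name: persona_accuracy(toy_model, questions, prefix)
            for name, prefix in PERSONAS.items()}
    print("per-persona accuracy:", accs)
    print("DIF-style score:", dif_score(accs))
```

In this toy setup the model ignores the persona prefix entirely, so the score is 0.0; a model whose accuracy shifts with the persona would produce a larger gap, which is the kind of behavior the abstract identifies as implicit bias.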
Similar Papers
What's Not Said Still Hurts: A Description-Based Evaluation Framework for Measuring Social Bias in LLMs
Computation and Language
Finds hidden unfairness in AI's words.
Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution
Computation and Language
Finds AI unfairly favors some people over others.
Implicit Bias in LLMs: A Survey
Computation and Language
Finds hidden unfairness in AI language.