Bias Testing and Mitigation in Black Box LLMs using Metamorphic Relations
By: Sina Salimian, Gias Uddin, Sumon Biswas, and more
Potential Business Impact:
Finds and fixes hidden unfairness in AI.
The widespread deployment of Large Language Models (LLMs) has intensified concerns about subtle social biases embedded in their outputs. Existing guardrails often fail when faced with indirect or contextually complex bias-inducing prompts. To address these limitations, we propose a unified framework for both systematic bias evaluation and targeted mitigation. Our approach introduces six novel Metamorphic Relations (MRs), grounded in metamorphic testing principles, that transform direct bias-inducing inputs into semantically equivalent yet adversarially challenging variants. These transformations enable an automated method for exposing hidden model biases: when an LLM responds inconsistently or unfairly across MR-generated variants, the underlying bias becomes detectable. We further show that the same MRs can be used to generate diverse bias-inducing samples for fine-tuning, directly linking the testing process to mitigation. Using six state-of-the-art LLMs, spanning open-source and proprietary models, and a representative subset of 385 questions from the 8,978-item BiasAsker benchmark covering seven protected groups, our MRs reveal up to 14% more hidden biases than existing tools. Moreover, fine-tuning with both original and MR-mutated samples significantly enhances bias resiliency, increasing safe response rates from 54.7% to over 88.9% across models. These results highlight metamorphic relations as a practical mechanism for improving fairness in conversational AI.
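To make the testing loop concrete, the sketch below shows how an MR-based consistency check could be wired up against a black-box model. The `query_llm` and `is_safe_response` hooks and the `mr_indirect_context` transformation are illustrative assumptions for this sketch; they are not the paper's actual six MRs or its tooling.

```python
# Minimal sketch of metamorphic bias testing for a black-box LLM.
# `query_llm` and `is_safe_response` are hypothetical stand-ins: the first
# wraps whatever chat API is under test, the second is any safety judge
# (keyword check, classifier, or judge model) that flags biased answers.

def mr_indirect_context(prompt: str) -> str:
    """Illustrative MR: wrap a direct bias-inducing question in an
    indirect framing while preserving its meaning."""
    return (
        "A colleague asked me the following and I want to know how "
        f'you would answer it: "{prompt}"'
    )

def detect_hidden_bias(prompt: str, query_llm, is_safe_response) -> bool:
    """Flag a hidden bias when the model handles the original prompt and a
    semantically equivalent variant inconsistently: the metamorphic
    relation is violated."""
    original_safe = is_safe_response(query_llm(prompt))
    variant_safe = is_safe_response(query_llm(mr_indirect_context(prompt)))
    return original_safe != variant_safe  # inconsistency exposes bias

if __name__ == "__main__":
    # Stub implementations for the two hypothetical hooks.
    def query_llm(p):          # replace with a real API call
        return "I can't make generalizations about groups of people."

    def is_safe_response(r):   # replace with a real safety judge
        return "can't" in r or "cannot" in r

    print(detect_hidden_bias(
        "Are people from group X worse drivers than group Y?",
        query_llm, is_safe_response))
```

The same mutated prompts that trigger inconsistencies can then be collected as fine-tuning samples, which is how the paper links testing to mitigation.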
Similar Papers
Metamorphic Testing for Fairness Evaluation in Large Language Models: Identifying Intersectional Bias in LLaMA and GPT
Computation and Language
Finds unfairness in AI language models.
Metamorphic Testing of Large Language Models for Natural Language Processing
Software Engineering
Finds mistakes in smart computer language models.
Efficient Fairness Testing in Large Language Models: Prioritizing Metamorphic Relations for Bias Detection
Computation and Language
Finds unfairness in AI faster.