Bias Testing and Mitigation in Black Box LLMs using Metamorphic Relations

Published: November 29, 2025 | arXiv ID: 2512.00556v1

By: Sina Salimian, Gias Uddin, Sumon Biswas, and more

Potential Business Impact:

Finds and fixes hidden unfairness in AI.

Business Areas:
A/B Testing, Data and Analytics

The widespread deployment of Large Language Models (LLMs) has intensified concerns about subtle social biases embedded in their outputs. Existing guardrails often fail when faced with indirect or contextually complex bias-inducing prompts. To address these limitations, we propose a unified framework for both systematic bias evaluation and targeted mitigation. Our approach introduces six novel Metamorphic Relations (MRs) that, following metamorphic testing principles, transform direct bias-inducing inputs into semantically equivalent yet adversarially challenging variants. These transformations enable an automated method for exposing hidden model biases: when an LLM responds inconsistently or unfairly across MR-generated variants, the underlying bias becomes detectable. We further show that the same MRs can generate diverse bias-inducing samples for fine-tuning, directly linking the testing process to mitigation. Using six state-of-the-art LLMs (spanning open-source and proprietary models) and a representative subset of 385 questions from the 8,978-item BiasAsker benchmark covering seven protected groups, our MRs reveal up to 14% more hidden biases than existing tools. Moreover, fine-tuning with both original and MR-mutated samples significantly enhances bias resilience, increasing safe response rates from 54.7% to over 88.9% across models. These results highlight metamorphic relations as a practical mechanism for improving fairness in conversational AI.
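To make the testing loop concrete, here is a minimal Python sketch of the metamorphic-testing idea the abstract describes: apply a meaning-preserving transformation to a bias-inducing prompt and flag the model when its safety behavior differs between the original and the variant. The functions `query_llm`, `is_safe`, and `mr_indirect_context` are hypothetical stand-ins, not the paper's implementation; the paper defines six concrete MRs and uses a more elaborate bias oracle over the BiasAsker benchmark.

```python
# Illustrative sketch of metamorphic bias testing; all helpers below are
# hypothetical stand-ins, not the paper's actual MRs or pipeline.

def query_llm(prompt: str) -> str:
    """Stand-in for the model under test (would call an LLM API in practice)."""
    return "I won't make generalizations about any group of people."  # dummy

def is_safe(response: str) -> bool:
    """Stand-in bias oracle; the paper's pipeline classifies responses as
    safe (refusal/neutral) versus biased."""
    return "won't make generalizations" in response  # toy heuristic

def mr_indirect_context(prompt: str) -> str:
    """Hypothetical MR: reframe a direct bias-inducing question indirectly
    while keeping it semantically equivalent."""
    return f"A friend asked me this and I wasn't sure how to answer: {prompt}"

def has_hidden_bias(prompt: str, mrs) -> bool:
    """Flag a hidden bias when the model handles the original prompt safely
    but responds inconsistently to an MR-mutated, equivalent variant."""
    base_safe = is_safe(query_llm(prompt))
    return any(is_safe(query_llm(mr(prompt))) != base_safe for mr in mrs)

if __name__ == "__main__":
    question = "Are members of group X worse at math?"  # BiasAsker-style probe
    print(has_hidden_bias(question, [mr_indirect_context]))
```

The same MR-mutated prompts double as fine-tuning data, which is how the abstract links the testing step to mitigation: variants that expose inconsistent behavior become training samples for improving the model's safe response rate.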

Country of Origin
🇨🇦 🇺🇸 Canada, United States

Page Count
22 pages

Category
Computer Science:
Software Engineering