LLM-Guided Synthetic Augmentation (LGSA) for Mitigating Bias in AI Systems
By: Sai Suhruth Reddy Karri, Yashwanth Sai Nallapuneni, Laxmi Narasimha Reddy Mallireddy, and more
Potential Business Impact:
Makes AI fairer by teaching it about everyone.
Bias in AI systems, especially those relying on natural language data, raises ethical and practical concerns. Underrepresentation of certain groups often leads to uneven performance across demographics. Traditional fairness methods, such as pre-processing, in-processing, and post-processing, depend on protected-attribute labels, involve accuracy-fairness trade-offs, and may not generalize across datasets. To address these challenges, we propose LLM-Guided Synthetic Augmentation (LGSA), which uses large language models to generate counterfactual examples for underrepresented groups while preserving label integrity. We evaluated LGSA on a controlled dataset of short English sentences with gendered pronouns, professions, and binary classification labels. Structured prompts were used to produce gender-swapped paraphrases, followed by quality control including semantic similarity checks, attribute verification, toxicity screening, and human spot checks. The augmented dataset expanded training coverage and was used to train a classifier under consistent conditions. Results show that LGSA reduces performance disparities without compromising accuracy. The baseline model achieved 96.7 percent accuracy with a 7.2 percent gender bias gap. Simple swap augmentation reduced the gap to 0.7 percent but lowered accuracy to 95.6 percent. LGSA achieved 99.1 percent accuracy with a 1.9 percent bias gap, improving performance on female-labeled examples. These findings demonstrate that LGSA is an effective strategy for bias mitigation, enhancing subgroup balance while maintaining high task accuracy and label fidelity.
Similar Papers
Benchmarking Educational LLMs with Analytics: A Case Study on Gender Bias in Feedback
Computation and Language
Finds unfairness in AI teacher feedback.
Detecting and Mitigating Bias in LLMs through Knowledge Graph-Augmented Training
Computation and Language
Makes smart computer programs fairer and less biased.
Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment
Artificial Intelligence
Removes gender bias from hiring AI.