Bias Mitigation Agent: Optimizing Source Selection for Fair and Balanced Knowledge Retrieval
By: Karanbir Singh, Deepak Muppiri, William Ngu
Potential Business Impact:
Cleans AI answers to be fair and true.
Large Language Models (LLMs) have transformed the field of artificial intelligence by unlocking the era of generative applications. Built on top of generative AI capabilities, Agentic AI represents a major shift toward autonomous, goal-driven systems that can reason, retrieve, and act. However, these systems also inherit the bias present in both internal and external information sources. This significantly affects the fairness and balance of retrieved information and, in turn, erodes user trust. To address this critical challenge, we introduce a novel Bias Mitigation Agent, a multi-agent system that orchestrates the bias-mitigation workflow through specialized agents. These agents optimize source selection so that retrieved content is both highly relevant and minimally biased, promoting fair and balanced knowledge dissemination. Experimental results demonstrate an 81.82% reduction in bias compared to a baseline naive retrieval strategy.
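The abstract does not specify how source selection is optimized; as a minimal sketch of the underlying idea, the snippet below ranks candidate sources by relevance penalized by an estimated bias score. The source names, scores, and the linear relevance-minus-bias trade-off are illustrative assumptions, not the authors' actual agent design.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    relevance: float  # 0..1, higher = more relevant to the query
    bias: float       # 0..1, higher = more biased (assumed precomputed)

def select_sources(sources, bias_weight=0.5, top_k=2):
    """Keep the top_k sources by relevance penalized by bias.

    bias_weight controls the relevance/fairness trade-off:
    0 reduces to naive relevance-only retrieval.
    """
    ranked = sorted(sources,
                    key=lambda s: s.relevance - bias_weight * s.bias,
                    reverse=True)
    return ranked[:top_k]

# Hypothetical candidate pool for one query.
candidates = [
    Source("wire-service",  relevance=0.80, bias=0.10),
    Source("partisan-blog", relevance=0.90, bias=0.85),
    Source("encyclopedia",  relevance=0.70, bias=0.05),
]

picked = select_sources(candidates)
print([s.name for s in picked])  # the highly biased blog is filtered out
```

With `bias_weight=0.5`, the most relevant but heavily biased source scores lowest (0.90 − 0.425 = 0.475) and is dropped in favor of less biased alternatives; a real system would learn or adaptively tune this trade-off rather than fix it by hand.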
Similar Papers
Toward Verifiable Misinformation Detection: A Multi-Tool LLM Agent Framework
Artificial Intelligence
Finds fake news by checking facts online.
Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data
Computation and Language
Finds and fixes unfairness in AI writing.
Emergence: Overcoming Privileged Information Bias in Asymmetric Embodied Agents via Active Querying
Artificial Intelligence
Teaches robots to ask questions for better teamwork.