BIASINSPECTOR: Detecting Bias in Structured Data through LLM Agents
By: Haoxuan Li, Mingyu Derek Ma, Jen-tse Huang, and more
Potential Business Impact:
Automatically finds unfairness in structured data, such as spreadsheets and databases.
Detecting biases in structured data is a complex and time-consuming task. Existing automated techniques handle only a narrow range of data types and rely heavily on case-by-case human handling, which limits their generalizability. Recently, large language model (LLM)-based agents have made significant progress in data science, but their ability to detect data biases remains insufficiently explored. To address this gap, we introduce BIASINSPECTOR, the first end-to-end, multi-agent synergy framework for automatic bias detection in structured data driven by user-specified requirements. The framework first develops a multi-stage plan for the user's bias detection task and then executes that plan with a diverse, well-suited set of tools, delivering detailed results that include explanations and visualizations. Because no standardized framework exists for evaluating how well LLM agents detect biases in data, we further propose a comprehensive benchmark comprising multiple evaluation metrics and a large set of test cases. Extensive experiments demonstrate that our framework achieves exceptional overall performance in structured data bias detection, setting a new milestone for fairer data applications.
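The abstract stays at a high level, so below is a minimal sketch of the kind of single bias check an agent like this might dispatch as one step of its plan. It is not the paper's implementation: the column names, the demographic-parity metric, and the four-fifths (0.8) flagging threshold are illustrative assumptions layered on standard fairness-auditing practice.

```python
# Minimal sketch (assumed, not the paper's tooling): one structured-data bias
# check that a BiasInspector-style agent could invoke as a tool in its plan.
import pandas as pd

def demographic_parity(df: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Compare positive-outcome rates across groups in a tabular dataset."""
    rates = df.groupby(group_col)[outcome_col].mean()  # P(outcome = 1 | group)
    gap = rates.max() - rates.min()                    # demographic parity difference
    ratio = rates.min() / rates.max()                  # disparate impact ratio
    return {
        "rates": rates.to_dict(),
        "parity_gap": gap,
        "impact_ratio": ratio,
        # Four-fifths rule: a common rough screen, used here as an assumption.
        "flagged": bool(ratio < 0.8),
    }

# Toy usage on a hypothetical loan-approval table.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   0],
})
print(demographic_parity(df, "gender", "approved"))
```

In the framework as described, checks like this would be one tool among many; the agent's plan would choose which columns, metrics, and thresholds to apply based on the user's stated requirements, then attach explanations and visualizations to the result.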
Similar Papers
Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data
Computation and Language
Finds and fixes unfairness in AI writing.
Fine-Grained Bias Detection in LLM: Enhancing detection mechanisms for nuanced biases
Computation and Language
Finds hidden unfairness in AI language.
No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language Models
Computation and Language
Finds and fixes unfairness in AI language.