Score: 1

BIASINSPECTOR: Detecting Bias in Structured Data through LLM Agents

Published: April 7, 2025 | arXiv ID: 2504.04855v1

By: Haoxuan Li, Mingyu Derek Ma, Jen-tse Huang, and more

Potential Business Impact:

Automatically finds unfairness and bias in structured (tabular) data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Detecting biases in structured data is a complex and time-consuming task. Existing automated techniques cover only a narrow range of data types and rely heavily on case-by-case human handling, which limits their generalizability. Large language model (LLM)-based agents have recently made significant progress in data science, but their ability to detect data biases remains insufficiently explored. To address this gap, we introduce BIASINSPECTOR, the first end-to-end, multi-agent synergy framework for automatic bias detection in structured data driven by specific user requirements. It first develops a multi-stage plan for the user-specified bias detection task and then executes it with a diverse, well-suited set of tools, delivering detailed results that include explanations and visualizations. Because there is no standardized framework for evaluating how well LLM agents detect bias in data, we further propose a comprehensive benchmark with multiple evaluation metrics and a large set of test cases. Extensive experiments demonstrate that our framework achieves exceptional overall performance in structured data bias detection, setting a new milestone for fairer data applications.
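The abstract describes a plan-then-execute workflow: an agent maps a user's bias-detection request to a multi-stage plan, runs it with a set of analysis tools, and returns results with explanations. The sketch below is a minimal, hypothetical illustration of that pattern on pandas-style tabular data; the function names, the hard-coded plan, and the toy dataset are assumptions for illustration only, not BIASINSPECTOR's actual API or tools.

```python
# Hypothetical plan-then-execute bias check on tabular data.
# All names (plan_bias_checks, TOOLS, etc.) are illustrative, not the paper's API.
import pandas as pd

def representation_skew(df: pd.DataFrame, column: str) -> dict:
    """Ratio between the most and least frequent group in a column."""
    counts = df[column].value_counts()
    return {"tool": "representation_skew",
            "column": column,
            "ratio": float(counts.max() / counts.min()),
            "explanation": f"Most/least frequent group ratio in '{column}'."}

def statistical_parity_difference(df: pd.DataFrame, group: str, outcome: str) -> dict:
    """Gap in positive-outcome rates between groups."""
    rates = df.groupby(group)[outcome].mean()
    return {"tool": "statistical_parity_difference",
            "group": group,
            "gap": float(rates.max() - rates.min()),
            "explanation": f"Gap in mean '{outcome}' across '{group}' groups."}

def plan_bias_checks(user_request: str) -> list:
    """Stand-in for the LLM planning stage: map a request to tool calls."""
    return [
        ("representation_skew", {"column": "gender"}),
        ("statistical_parity_difference", {"group": "gender", "outcome": "hired"}),
    ]

TOOLS = {
    "representation_skew": representation_skew,
    "statistical_parity_difference": statistical_parity_difference,
}

if __name__ == "__main__":
    # Toy dataset standing in for user-provided structured data.
    data = pd.DataFrame({
        "gender": ["F", "F", "M", "M", "M", "M"],
        "hired":  [0,   1,   1,   1,   1,   0],
    })
    for tool_name, kwargs in plan_bias_checks("Check hiring data for gender bias"):
        print(TOOLS[tool_name](data, **kwargs))
```

In the paper's framework the plan is produced by LLM agents and the tool set is much broader; the sketch only shows the overall control flow of planning, tool execution, and explanatory output.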

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Artificial Intelligence