Bita: A Conversational Assistant for Fairness Testing
By: Keeryn Johnson, Cleyton Magalhaes, Ronnie de Souza Santos
Bias in AI systems can lead to unfair and discriminatory outcomes, especially when left untested before deployment. Although fairness testing aims to identify and mitigate such bias, existing tools are often difficult to use, requiring advanced expertise and offering limited support for real-world workflows. To address this, we introduce Bita, a conversational assistant designed to help software testers detect potential sources of bias, evaluate test plans through a fairness lens, and generate fairness-oriented exploratory testing charters. Bita integrates a large language model with retrieval-augmented generation, grounding its responses in curated fairness literature. Our validation demonstrates how Bita supports fairness testing tasks on real-world AI systems, providing structured, reproducible evidence of its utility. In summary, our work contributes a practical tool that operationalizes fairness testing in a way that is accessible, systematic, and directly applicable to industrial practice.
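To make the retrieval-augmented design described in the abstract concrete, the sketch below shows one plausible way such an assistant could ground a tester's question in curated fairness literature before prompting an LLM. It is a minimal illustration, not Bita's actual implementation: the corpus entries, function names, and the print-only final step (standing in for a real LLM call) are all assumptions introduced here for clarity.

```python
# Minimal sketch of a retrieval-augmented fairness-testing assistant.
# NOT Bita's actual implementation: the corpus snippets, function names,
# and the stubbed LLM step are illustrative assumptions only.
from collections import Counter
from dataclasses import dataclass
import math
import re


@dataclass
class Passage:
    source: str  # citation label for a curated fairness-literature snippet
    text: str


# Hypothetical curated corpus; a real system would index fairness literature.
CORPUS = [
    Passage("Fairness testing survey (hypothetical entry)",
            "Protected attributes such as race, gender, and age should be "
            "perturbed while holding other inputs fixed to reveal disparate "
            "treatment in model outputs."),
    Passage("Exploratory testing guide (hypothetical entry)",
            "A fairness-oriented charter names the target feature, the user "
            "groups at risk, the bias sources to probe, and the oracle used "
            "to judge unfair behaviour."),
]


def _tokens(text: str) -> Counter:
    """Tokenize text into a lowercase bag-of-words counter."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query: str, k: int = 2) -> list[Passage]:
    """Return the k corpus passages most similar to the tester's question."""
    q = _tokens(query)
    ranked = sorted(CORPUS, key=lambda p: _cosine(q, _tokens(p.text)),
                    reverse=True)
    return ranked[:k]


def build_prompt(question: str) -> str:
    """Ground the LLM prompt in the retrieved fairness-literature snippets."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in retrieve(question))
    return (
        "You are a fairness-testing assistant. Answer using only the context.\n"
        f"Context:\n{context}\n\n"
        f"Tester question: {question}\n"
        "Produce a fairness-oriented exploratory testing charter."
    )


if __name__ == "__main__":
    # In a deployed assistant this prompt would be sent to an LLM API;
    # printing it here simply makes the grounding step visible.
    print(build_prompt("How do I probe a loan-approval model for gender bias?"))
```

The grounding step is the key design choice: by restricting the model's context to retrieved passages from a vetted corpus, the assistant's advice on bias sources, test plans, and charters stays traceable to the fairness literature rather than to unconstrained model output.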