"Over-the-Hood" AI Inclusivity Bugs and How 3 AI Product Teams Found and Fixed Them

Published: October 21, 2025 | arXiv ID: 2510.19033v1

By: Andrew Anderson, Fatima A. Moussaoui, Jimena Noa Guevara, and more

Potential Business Impact:

Helps teams find and fix user-facing AI design flaws that unfairly exclude some users.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

While much research has shown the presence of AI's "under-the-hood" biases (e.g., in algorithms, training data, etc.), what about "over-the-hood" inclusivity biases: barriers in user-facing AI products that disproportionately exclude users with certain problem-solving approaches? Recent research has begun to report the existence of such biases, but what do they look like, how prevalent are they, and how can developers find and fix them? To find out, the authors conducted a field study with 3 AI product teams, investigating what kinds of AI inclusivity bugs exist uniquely in user-facing AI products, and whether and how AI product teams might harness an existing (non-AI-oriented) inclusive design method to find and fix them. The teams' work resulted in identifying 6 types of AI inclusivity bugs arising 83 times, fixes covering 47 of these bug instances, and a new variation of the GenderMag inclusive design method, GenderMag-for-AI, that is especially effective at detecting certain kinds of AI inclusivity bugs.

Country of Origin
🇺🇸 United States

Page Count
27 pages

Category
Computer Science:
Human-Computer Interaction