"Over-the-Hood" AI Inclusivity Bugs and How 3 AI Product Teams Found and Fixed Them
By: Andrew Anderson, Fatima A. Moussaoui, Jimena Noa Guevara, and more
Potential Business Impact:
Fixes AI that unfairly blocks some users.
While much research has demonstrated AI's "under-the-hood" biases (e.g., in algorithms and training data), what about "over-the-hood" inclusivity biases: barriers in user-facing AI products that disproportionately exclude users with certain problem-solving approaches? Recent research has begun to report the existence of such biases, but what do they look like, how prevalent are they, and how can developers find and fix them? To find out, we conducted a field study with 3 AI product teams to investigate what kinds of AI inclusivity bugs exist uniquely in user-facing AI products, and whether and how AI product teams might harness an existing (non-AI-oriented) inclusive design method to find and fix them. The teams' work resulted in identifying 6 types of AI inclusivity bugs arising 83 times, fixes covering 47 of these bug instances, and a new variation of the GenderMag inclusive design method, GenderMag-for-AI, that is especially effective at detecting certain kinds of AI inclusivity bugs.
Similar Papers
"Accessibility people, you go work on that thing of yours over there": Addressing Disability Inclusion in AI Product Organizations
Human-Computer Interaction
Helps AI makers build fair tools for everyone.
Data-Driven and Participatory Approaches toward Neuro-Inclusive AI
Human-Computer Interaction
Makes AI understand and include autistic people better.
Bug Detective and Quality Coach: Developers' Mental Models of AI-Assisted IDE Tools
Software Engineering
Helps coders find bugs and improve code.