Who Gets Left Behind? Auditing Disability Inclusivity in Large Language Models
By: Deepika Dash, Yeshil Bangera, Mithil Bangera, and more
Potential Business Impact:
Helps AI give better advice to all people.
Large Language Models (LLMs) are increasingly used for accessibility guidance, yet many disability groups remain underserved by their advice. To address this gap, we present a taxonomy-aligned benchmark of human-validated, general-purpose accessibility questions, designed to systematically audit inclusivity across disabilities. Our benchmark evaluates models along three dimensions: Question-Level Coverage (breadth within answers), Disability-Level Coverage (balance across nine disability categories), and Depth (specificity of support). Applying this framework to 17 proprietary and open-weight models reveals persistent inclusivity gaps: Vision, Hearing, and Mobility are frequently addressed, while Speech, Genetic/Developmental, Sensory-Cognitive, and Mental Health remain underserved. Depth is similarly concentrated in a few categories but sparse elsewhere. These findings reveal who gets left behind in current LLM accessibility guidance and highlight actionable levers: taxonomy-aware prompting and training, and evaluations that jointly audit breadth, balance, and depth.
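To make the three dimensions concrete, here is a minimal sketch of how such breadth, balance, and depth scores could be computed over model answers tagged against the taxonomy. All function names, data shapes, and scoring rules are illustrative assumptions, not the paper's published implementation, and since the abstract names only seven of the nine categories, two appear below as placeholders.

```python
from collections import Counter

# Illustrative nine-category disability taxonomy. The abstract names seven
# categories; "Category8" and "Category9" are placeholders for the rest.
TAXONOMY = [
    "Vision", "Hearing", "Mobility", "Speech", "Genetic/Developmental",
    "Sensory-Cognitive", "Mental Health", "Category8", "Category9",
]

def question_level_coverage(answer_tags: list[set[str]]) -> float:
    """Breadth within answers: mean fraction of taxonomy categories
    that each individual answer addresses."""
    return sum(len(tags & set(TAXONOMY)) / len(TAXONOMY)
               for tags in answer_tags) / len(answer_tags)

def disability_level_coverage(answer_tags: list[set[str]]) -> dict[str, float]:
    """Balance across categories: fraction of all answers that mention
    each category at least once."""
    counts = Counter(tag for tags in answer_tags for tag in tags)
    return {cat: counts[cat] / len(answer_tags) for cat in TAXONOMY}

def depth_by_category(specificity: list[dict[str, int]]) -> dict[str, float]:
    """Depth: mean specificity score (e.g., a 0-2 rubric, assumed here)
    per category, averaged over the answers that address that category."""
    totals, counts = Counter(), Counter()
    for scores in specificity:
        for cat, s in scores.items():
            totals[cat] += s
            counts[cat] += 1
    return {cat: totals[cat] / counts[cat] for cat in TAXONOMY if counts[cat]}

# Tiny usage example with hand-tagged answers.
answers = [{"Vision", "Hearing"}, {"Vision"}, {"Mobility", "Mental Health"}]
print(round(question_level_coverage(answers), 3))    # 0.185: low breadth
print(disability_level_coverage(answers)["Speech"])  # 0.0: Speech never covered
```

Under this kind of scoring, an inclusive model would show near-uniform disability-level coverage across all nine categories; the gaps the paper reports would surface as near-zero coverage and depth entries for Speech, Genetic/Developmental, Sensory-Cognitive, and Mental Health.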
Similar Papers
Who's Asking? Investigating Bias Through the Lens of Disability Framed Queries in LLMs
Computation and Language
Computers guess wrong about people with disabilities.
Adaptive Generation of Bias-Eliciting Questions for LLMs
Computers and Society
Finds unfairness in AI answers to real questions.
Evolutionary perspective of large language models on shaping research insights into healthcare disparities
Computers and Society
Helps understand health problems for all people.