Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
By: Maria J. P. Peixoto, Akriti Pandey, Ahsan Zaman, et al.
Potential Business Impact:
Makes AI explanations accessible to blind and low-vision users.
As AI systems are increasingly deployed to support decision-making in critical domains, explainability has become a means to enhance the understandability of model outputs and enable users to make more informed and conscious choices. However, despite growing interest in the usability of eXplainable AI (XAI), the accessibility of these methods, particularly for users with vision impairments, remains underexplored. This paper investigates accessibility gaps in XAI through a two-pronged approach. First, a literature review of 79 studies reveals that evaluations of XAI techniques rarely include disabled users, with most explanations relying on inherently visual formats. Second, we present a four-part methodological proof of concept that operationalizes inclusive XAI design: (1) categorization of AI systems, (2) persona definition and contextualization, (3) prototype design and implementation, and (4) expert and user assessment of XAI techniques for accessibility. Preliminary findings suggest that simplified explanations are more comprehensible for non-visual users than detailed ones, and that multimodal presentation is required for more equitable interpretability.
Similar Papers
On the Design and Evaluation of Human-centered Explainable AI Systems: A Systematic Review and Taxonomy
Artificial Intelligence
Helps people understand how smart computers make choices.
Beyond Technocratic XAI: The Who, What & How in Explanation Design
Computers and Society
Helps people understand how smart computers make decisions.
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
AI explains decisions like a helpful friend.