Playing telephone with generative models: "verification disability," "compelled reliance," and accessibility in data visualization
By: Frank Elavsky, Cindy Xiong Bearfield
Potential Business Impact:
AI can't be trusted to describe pictures for blind people.
This paper is a collaboration between two areas of expertise in data visualization: accessibility and bias. In particular, the growing role of generative models in accessibility is a worrying trend for data visualization. These models are increasingly used both to help author visualizations and to generate descriptions of existing visualizations for people who are blind or low vision or who use assistive technologies such as screen readers. Sighted human-to-human bias is already an established area of concern for theory, research, and design in data visualization. But what happens when someone is unable to verify the model's output or adequately interrogate algorithmic bias, such as when a blind person asks a model to describe a chart? In such scenarios, trust from the user is not earned; rather, reliance is compelled by the model-to-human relationship. In this work, we explore the dangers of AI-generated descriptions for accessibility by playing a game of telephone between models, observing how bias is produced as a data visualization is interpreted and re-interpreted. We unpack the ways that model failure in visualization is especially problematic for users with visual impairments, and we suggest directions forward for three distinct readers of this piece: technologists who build model-assisted interfaces for end users, users with disabilities who leverage models for their own purposes, and researchers concerned with bias, accessibility, or visualization.
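To make the "telephone" setup concrete, here is a minimal sketch of one way such a loop between models could be run: a vision model describes a chart image, then subsequent rounds see only the previous round's description, so any bias or error in interpretation compounds. It assumes an OpenAI-style chat API, a model name of "gpt-4o", and a local file "chart.png"; none of these specifics come from the paper, and the authors' actual protocol may differ.

```python
"""Sketch of a model-to-model "telephone" loop over a chart image.

Assumptions (not from the paper): an OpenAI-style vision/chat API,
a vision-capable model, and a local PNG of the chart.
"""
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # hypothetical choice; any vision-capable model works


def describe_chart(image_path: str) -> str:
    """Round 0: the model describes the chart from pixels."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this data visualization for a screen-reader user."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


def reinterpret(description: str) -> str:
    """Later rounds: the model sees only the prior description, not the chart."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": ("Based only on this description of a chart, restate "
                        "what the chart shows:\n\n" + description),
        }],
    )
    return resp.choices[0].message.content


rounds = [describe_chart("chart.png")]
for _ in range(4):                 # each pass is one link in the telephone chain
    rounds.append(reinterpret(rounds[-1]))

for i, text in enumerate(rounds):  # inspect how claims drift across rounds
    print(f"--- round {i} ---\n{text}\n")
```

A blind user sits at the end of exactly this kind of chain with no way to compare any round against the original pixels, which is the verification gap the paper examines.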
Similar Papers
Creation, Critique, and Consumption: Exploring Generative AI Descriptions for Supporting Blind and Low Vision Professionals with Visual Tasks
Human-Computer Interaction
Helps blind people do jobs with pictures.
Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
Artificial Intelligence
Makes AI explanations understandable for blind people.
Your Model Is Unfair, Are You Even Aware? Inverse Relationship Between Comprehension and Trust in Explainability Visualizations of Biased ML Models
Human-Computer Interaction
Makes AI fairer by showing its hidden bias.