Score: 1

Caption: Generating Informative Content Labels for Image Buttons Using Next-Screen Context

Published: August 12, 2025 | arXiv ID: 2508.08731v1

By: Mingyuan Zhong, Ajit Mallavarapu, Qing Nie

BigTech Affiliations: University of Washington

Potential Business Impact:

Helps blind and low-vision users understand phone screens by generating accurate labels for unlabeled image buttons, which screen readers can then announce.

We present Caption, an LLM-powered content label generation tool for visual interactive elements on mobile devices. Content labels are essential for screen readers to provide announcements for image-based elements, but are often missing or uninformative due to developer neglect. Automated captioning systems attempt to address this, but are limited to on-screen context, often resulting in inaccurate or unspecific labels. To generate more accurate and descriptive labels, Caption collects next-screen context on interactive elements by navigating to the destination screen that appears after an interaction and incorporating information from both the origin and destination screens. Preliminary results show Caption generates more accurate labels than both human annotators and an LLM baseline. We expect Caption to empower developers by providing actionable accessibility suggestions and directly support on-demand repairs by screen reader users.
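The core idea above, combining the origin screen with the destination screen reached after tapping an element, can be sketched as a prompt-assembly step. This is a minimal illustration only: the `Screen` structure, field names, and prompt wording are assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    # Hypothetical screen representation; Caption's real input format is not
    # described in this summary.
    app_name: str
    title: str
    visible_text: list[str]  # text labels visible on the screen

def build_label_prompt(element_bounds: str, origin: Screen, destination: Screen) -> str:
    """Assemble an LLM prompt that merges origin-screen and next-screen
    context for an unlabeled image button (assumed prompt format)."""
    return (
        f"An unlabeled image button at {element_bounds} appears on the "
        f"'{origin.title}' screen of {origin.app_name}, near: "
        f"{', '.join(origin.visible_text)}. Tapping it opens the "
        f"'{destination.title}' screen, which shows: "
        f"{', '.join(destination.visible_text)}. "
        "Generate a concise, descriptive content label for this button."
    )

# Example: a gear icon whose destination screen reveals it opens settings.
prompt = build_label_prompt(
    "(320, 48)",
    Screen("ExampleApp", "Home", ["Inbox", "Compose"]),
    Screen("ExampleApp", "Settings", ["Notifications", "Account", "Privacy"]),
)
```

The key point the sketch captures is that the destination screen's content (here, "Settings") disambiguates what the icon does, which on-screen context alone often cannot.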

Country of Origin
🇺🇸 United States

Page Count
3 pages

Category
Computer Science:
Human-Computer Interaction