Caption: Generating Informative Content Labels for Image Buttons Using Next-Screen Context
By: Mingyuan Zhong, Ajit Mallavarapu, Qing Nie
Potential Business Impact:
Helps blind people use phone apps by describing unlabeled image buttons to their screen readers.
We present Caption, an LLM-powered content label generation tool for visual interactive elements on mobile devices. Content labels are essential for screen readers to provide announcements for image-based elements, but are often missing or uninformative due to developer neglect. Automated captioning systems attempt to address this, but are limited to on-screen context, often producing inaccurate or overly generic labels. To generate more accurate and descriptive labels, Caption collects next-screen context for interactive elements by navigating to the destination screen that appears after an interaction and incorporating information from both the origin and destination screens. Preliminary results show that Caption generates more accurate labels than both human annotators and an LLM baseline. We expect Caption to empower developers by providing actionable accessibility suggestions and to directly support on-demand repairs by screen reader users.
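The paper's implementation is not shown here, so the following Python sketch is illustrative only. It shows the core idea of next-screen context: combine what the origin screen and the destination screen display into a single labeling prompt for an LLM. The ScreenSnapshot representation, the build_prompt wording, and the injected call_llm client are all hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class ScreenSnapshot:
    """Flattened view of one screen (hypothetical representation)."""
    activity_name: str
    visible_text: list[str]  # text of the elements visible on the screen

def build_prompt(element_id: str, origin: ScreenSnapshot,
                 dest: ScreenSnapshot) -> str:
    """Merge origin- and next-screen context into one labeling prompt."""
    return (
        "You write concise content labels for unlabeled image buttons.\n"
        f"Button resource ID: {element_id}\n"
        f"Origin screen ({origin.activity_name}) shows: "
        f"{', '.join(origin.visible_text)}\n"
        f"Tapping the button opens {dest.activity_name}, which shows: "
        f"{', '.join(dest.visible_text)}\n"
        "Reply with a label of one to three words describing what the button does."
    )

def generate_label(element_id, origin, dest, call_llm):
    """call_llm is any text-in/text-out LLM client, injected so the sketch stays runnable."""
    return call_llm(build_prompt(element_id, origin, dest)).strip()

# Toy usage with a stubbed LLM. A real system would drive the UI
# (e.g., via an accessibility service), perform the tap, and capture
# the destination screen before prompting.
if __name__ == "__main__":
    origin = ScreenSnapshot("MusicPlayerActivity", ["Now Playing", "Song Title"])
    dest = ScreenSnapshot("QueueActivity", ["Up Next", "Clear queue"])
    stub = lambda prompt: "Open queue"  # stand-in for a real LLM API call
    print(generate_label("btn_overflow", origin, dest, stub))  # -> "Open queue"
```

The key design point the sketch captures is that the destination screen often names the button's function outright (here, a queue view), which on-screen context alone cannot reveal.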
Similar Papers
Automated Generation of Accurate Privacy Captions From Android Source Code Using Large Language Models
Cryptography and Security
Automatically writes an app's privacy label by reading its source code.
Leveraging Author-Specific Context for Scientific Figure Caption Generation: 3rd SciCap Challenge
Computation and Language
Writes better figure captions for science papers.
Multi-LLM Collaborative Caption Generation in Scientific Documents
Computation and Language
Several AI models work together to caption figures in science papers.