"Less is More": Reducing Cognitive Load and Task Drift in Real-Time Multimodal Assistive Agents for the Visually Impaired
By: Yi Zhao, Siqi Wang, Qiqun Geng, and more
Potential Business Impact:
Helps blind people use phones with less mental effort.
Vision-Language Models (VLMs) enable on-demand visual assistance, yet current applications for people with visual impairments (PVI) impose high cognitive load and exhibit task drift, limiting real-world utility. We first conducted a formative study with 15 PVI and identified three requirements for visually impaired assistance (VIA): low latency for real-time use, minimal cognitive load, and hallucination-resistant responses to sustain trust. Informed by the formative study, we present VIA-Agent, a prototype that co-optimizes its cognitive "brain" and interactive "body". The brain implements a goal-persistent design with calibrated conciseness to produce brief, actionable guidance; the body adopts a real-time communication (RTC) embodiment, evolving from a request-response Model Context Protocol (MCP) pipeline, to support fluid interaction. We evaluated VIA-Agent with 9 PVI across navigation and object-retrieval tasks in the wild against BeMyAI and Doubao. VIA-Agent significantly outperformed BeMyAI both quantitatively and qualitatively. While achieving success rates comparable to Doubao, it reduced mean task time by 39.9% (70.1 s vs. 110.7 s), required fewer conversational turns (4.3 vs. 5.0), and lowered perceived cognitive load and task drift. System Usability Scale (SUS) results aligned with these findings, with VIA-Agent achieving the highest usability. We hope this work inspires the development of more human-centered VIA systems.
Similar Papers
Enhancing Agentic Autonomous Scientific Discovery with Vision-Language Model Capabilities
Computation and Language
Computers discover science by checking their own work.
Less Redundancy: Boosting Practicality of Vision Language Model in Walking Assistants
Computation and Language
Helps blind people navigate safely with fewer reminders.