Less Redundancy: Boosting Practicality of Vision Language Model in Walking Assistants
By: Chongyang Li, Yuan Zhiqiang, Jiapei Zhang, and more
Potential Business Impact:
Helps blind people navigate safely with fewer reminders.
Approximately 283 million people worldwide live with visual impairments, motivating growing research into leveraging Vision Language Models (VLMs) to build effective walking assistance systems for blind and low-vision individuals. However, existing VLMs for the walking assistance task often produce outputs with considerable redundancy and extraneous detail, which hampers users' ability to accurately assess their surroundings. Moreover, these models typically cannot proactively assess environmental risk and adaptively trigger reminders only when the scene warrants them, leading to excessive temporal redundancy. To mitigate both output and temporal redundancy, we propose WalkVLM-LR, a walking assistance model with less redundancy. To reduce output redundancy, we introduce four human-preference-based custom reward functions within a GRPO-based reasoning framework that optimize outputs for conciseness, fluency, keyword density, and accuracy, yielding more informative and streamlined responses. To minimize temporal redundancy, we incorporate an environment awareness discriminator that shares the visual encoder with the VLM, reducing redundant computation and improving discriminative efficiency, so that WalkVLM-LR can assess scene risk levels and suppress unnecessary reminders. Experimental results demonstrate that our method achieves state-of-the-art performance across all evaluation metrics compared with other models, particularly in output conciseness and reduced temporal redundancy.
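The abstract describes four human-preference-based reward functions combined under a GRPO-style framework. The sketch below is a minimal, hypothetical illustration of how such rewards could be composed into a scalar and turned into group-relative advantages; all function names, weights, and scoring heuristics are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: combining the four preference rewards named in the
# abstract (conciseness, fluency, keyword density, accuracy) into one scalar,
# then computing GRPO-style group-relative advantages. Weights and heuristics
# below are illustrative assumptions, not the authors' code.

from dataclasses import dataclass
from typing import List


@dataclass
class RewardWeights:
    conciseness: float = 0.25
    fluency: float = 0.25
    keyword_density: float = 0.25
    accuracy: float = 0.25


def conciseness_reward(text: str, target_len: int = 30) -> float:
    """Shorter outputs score higher; penalize length beyond a soft token budget."""
    n_tokens = len(text.split())
    return max(0.0, 1.0 - max(0, n_tokens - target_len) / target_len)


def keyword_density_reward(text: str, keywords: List[str]) -> float:
    """Fraction of scene-critical keywords (e.g. obstacle names) mentioned."""
    if not keywords:
        return 1.0
    text_lower = text.lower()
    hits = sum(1 for k in keywords if k.lower() in text_lower)
    return hits / len(keywords)


def total_reward(text: str, keywords: List[str],
                 fluency: float, accuracy: float,
                 w: RewardWeights = RewardWeights()) -> float:
    """Weighted sum of the four terms; fluency and accuracy scores are assumed
    to come from external scorers (e.g. an LM judge or reference matching)."""
    return (w.conciseness * conciseness_reward(text)
            + w.keyword_density * keyword_density_reward(text, keywords)
            + w.fluency * fluency
            + w.accuracy * accuracy)


def grpo_advantages(rewards: List[float]) -> List[float]:
    """Normalize rewards within a group of sampled responses to the same prompt."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    std = std if std > 0 else 1.0
    return [(r - mean) / std for r in rewards]
```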
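The abstract also mentions an environment awareness discriminator that reuses the VLM's visual encoder to decide when a reminder is needed. The following is a minimal sketch under assumed details (number of risk levels, trigger threshold, pooling strategy are all hypothetical), showing how a lightweight risk head could sit on shared visual features without running a second image backbone.

```python
# Hypothetical sketch of a risk-level head on top of shared VLM visual
# features, used to gate reminders. Class names, risk levels, and the
# threshold are illustrative assumptions, not the paper's design.

import torch
import torch.nn as nn


class RiskDiscriminator(nn.Module):
    def __init__(self, vision_dim: int, num_risk_levels: int = 3):
        super().__init__()
        # Small MLP over pooled visual tokens; the shared encoder is not duplicated.
        self.head = nn.Sequential(
            nn.Linear(vision_dim, vision_dim // 4),
            nn.GELU(),
            nn.Linear(vision_dim // 4, num_risk_levels),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim) from the VLM encoder.
        pooled = vision_features.mean(dim=1)
        return self.head(pooled)


def should_remind(vision_features: torch.Tensor,
                  discriminator: RiskDiscriminator,
                  risk_threshold: int = 1) -> bool:
    """Trigger a spoken reminder only when the predicted risk level exceeds a
    threshold, suppressing temporally redundant prompts in safe scenes.
    Assumes a single frame (batch size 1)."""
    with torch.no_grad():
        logits = discriminator(vision_features)
        risk_level = int(logits.argmax(dim=-1).item())
    return risk_level > risk_threshold
```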
Similar Papers
Toward More Reliable Artificial Intelligence: Reducing Hallucinations in Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
ExploreVLM: Closed-Loop Robot Exploration Task Planning with Vision-Language Models
Robotics
Robots learn to explore and do tasks better.