Fine-Tuning Vision-Language Models for Visual Navigation Assistance

Published: September 9, 2025 | arXiv ID: 2509.07488v1

By: Xiao Li, Bharat Gandhi, Ming Zhan, and more

Potential Business Impact:

Helps visually impaired people navigate indoors using voice guidance.

Business Areas:
Navigation and Mapping

We address vision-language-driven indoor navigation to assist visually impaired individuals in reaching a target location using images and natural language guidance. Traditional navigation systems are ineffective indoors due to the lack of precise location data such as GPS. Our approach integrates vision and language models to generate step-by-step navigational instructions, enhancing accessibility and independence. We fine-tune the BLIP-2 model with Low-Rank Adaptation (LoRA) on a manually annotated indoor navigation dataset. We propose an evaluation metric that refines the BERT F1 score by emphasizing directional and sequential terms, providing a more comprehensive measure of navigational performance. After applying LoRA, the model's ability to generate directional instructions improved significantly, overcoming limitations of the original BLIP-2 model.
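
The fine-tuning step attaches small trainable low-rank adapters to a frozen BLIP-2 backbone. Below is a minimal sketch using Hugging Face transformers and peft; the checkpoint name, rank, scaling factor, and target modules are illustrative assumptions, not the configuration reported in the paper.

```python
# Sketch: LoRA adapters on BLIP-2 with Hugging Face transformers + peft.
# Checkpoint and LoRA hyperparameters below are assumptions for illustration.
import torch
from transformers import Blip2ForConditionalGeneration, Blip2Processor
from peft import LoraConfig, get_peft_model

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
)

# Inject low-rank update matrices into the attention projections.
# Only these small matrices are trained; the base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # rank of the update matrices (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

From here, the adapted model is trained as usual on (image, instruction) pairs; only the adapter weights receive gradients, which keeps fine-tuning cheap relative to full-model updates.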
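The proposed metric builds on BERTScore's F1 by giving extra weight to navigation-critical tokens. The sketch below captures that idea with plain token matching standing in for BERT-embedding similarity; the keyword lists and the weight value are assumptions, not the paper's definitions.

```python
# Sketch: a token-level F1 that up-weights directional and sequential words.
# The paper scores similarity over BERT embeddings; exact-match tokens stand
# in for that here. Keyword sets and the emphasis weight are assumptions.
DIRECTIONAL = {"left", "right", "forward", "backward", "straight", "turn"}
SEQUENTIAL = {"first", "then", "next", "after", "finally", "before"}

def token_weight(token: str, emphasis: float = 2.0) -> float:
    """Up-weight tokens that encode direction or step ordering."""
    return emphasis if token in DIRECTIONAL | SEQUENTIAL else 1.0

def weighted_f1(candidate: str, reference: str) -> float:
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand or not ref:
        return 0.0
    matched = set(cand) & set(ref)
    # Weighted precision/recall: weight of matched tokens over total weight.
    precision = sum(token_weight(t) for t in cand if t in matched) / \
                sum(token_weight(t) for t in cand)
    recall = sum(token_weight(t) for t in ref if t in matched) / \
             sum(token_weight(t) for t in ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(weighted_f1("turn left then go straight", "turn right then go straight"))
```

On this example, swapping "left" for "right" costs more (about 0.78) than it would under an unweighted token F1 (0.80), which is the behavior the refined metric is designed to capture: getting a direction wrong should hurt the score more than a generic word mismatch.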

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition