LISN: Language-Instructed Social Navigation with VLM-based Controller Modulating
By: Junting Chen, Yunchuan Li, Panfeng Jiang, and more
Toward human-robot coexistence, socially aware navigation is a key capability for mobile robots. Yet existing studies in this area focus mainly on path efficiency and pedestrian collision avoidance, which are essential but represent only a fraction of social navigation. Beyond these basics, robots must also comply with user instructions, aligning their actions with task goals and social norms expressed by humans. In this work, we present LISN-Bench, the first simulation-based benchmark for language-instructed social navigation. Built on Rosnav-Arena 3.0, it is the first standardized social navigation benchmark to incorporate instruction following and scene understanding across diverse contexts. To address this task, we further propose Social-Nav-Modulator, a fast-slow hierarchical system in which a VLM agent modulates costmaps and controller parameters. Decoupling low-level action generation from the slower VLM loop reduces reliance on high-frequency VLM inference while improving dynamic avoidance and perception adaptability. Our method achieves an average success rate of 91.3%, 63% higher than the most competitive baseline, with most of the improvement observed in challenging tasks such as following a person in a crowd and navigating while strictly avoiding instruction-forbidden regions. The project website is at: https://social-nav.github.io/LISN-project/
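The fast-slow decoupling described in the abstract can be pictured as two loops sharing a small set of modulation parameters: a slow loop queries a VLM at low frequency and writes costmap weights and controller limits, while a fast control loop reads them on every tick. The sketch below is a minimal illustration of this pattern, not the authors' implementation; all names (`ModulationParams`, `query_vlm`, `local_planner_step`, `publish_cmd`) and the specific loop rates are hypothetical stand-ins.

```python
import threading
import time
from dataclasses import dataclass, field

# --- Hypothetical stubs standing in for the real VLM and planner --------

def query_vlm(instruction: str) -> dict:
    """Placeholder for a VLM call; returns suggested modulation values."""
    if "avoid" in instruction:
        return {"weights": {"forbidden_region": 5.0}, "max_speed": 0.4}
    return {"weights": {}, "max_speed": 0.8}

def local_planner_step(weights: dict, v_max: float) -> tuple:
    """Placeholder local planner: returns a (linear, angular) velocity command."""
    return (min(0.5, v_max), 0.0)

def publish_cmd(cmd: tuple) -> None:
    """Placeholder for actuation, e.g. publishing a ROS cmd_vel message."""
    pass

# --- Shared state and the two loops --------------------------------------

@dataclass
class ModulationParams:
    """Parameters the slow VLM loop writes and the fast loop reads.
    Field names are illustrative, not taken from the paper."""
    costmap_weights: dict = field(
        default_factory=lambda: {"pedestrian": 1.0, "forbidden_region": 1.0})
    max_speed: float = 0.8  # m/s
    lock: threading.Lock = field(default_factory=threading.Lock)

def slow_vlm_loop(params: ModulationParams, instruction: str,
                  stop: threading.Event) -> None:
    """Slow loop (~0.5 Hz): ask the VLM how to adapt costs and limits."""
    while not stop.is_set():
        advice = query_vlm(instruction)
        with params.lock:
            params.costmap_weights.update(advice.get("weights", {}))
            params.max_speed = advice.get("max_speed", params.max_speed)
        time.sleep(2.0)  # VLM latency dominates; no need to run faster

def fast_control_loop(params: ModulationParams, stop: threading.Event) -> None:
    """Fast loop (~20 Hz): classical local planning, modulated by params."""
    while not stop.is_set():
        with params.lock:  # snapshot so planning never blocks on the VLM
            weights = dict(params.costmap_weights)
            v_max = params.max_speed
        publish_cmd(local_planner_step(weights, v_max))
        time.sleep(0.05)

if __name__ == "__main__":
    params, stop = ModulationParams(), threading.Event()
    threading.Thread(target=slow_vlm_loop,
                     args=(params, "avoid the lobby", stop), daemon=True).start()
    threading.Thread(target=fast_control_loop,
                     args=(params, stop), daemon=True).start()
    time.sleep(5.0)
    stop.set()
```

The point of the structure is that low-level action generation keeps running at control rate even when a VLM query takes seconds; the VLM only nudges the costmap weights and speed limit the fast loop consumes.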