Using Vision-Language Models as Proxies for Social Intelligence in Human-Robot Interaction
By: Fanjun Bu, Melina Tsai, Audrey Tjokro, and more
Potential Business Impact:
Helps robots understand when people want to talk.
Robots operating in everyday environments must often decide when and whether to engage with people, yet such decisions hinge on subtle nonverbal cues that unfold over time and are difficult to model explicitly. Drawing on a five-day Wizard-of-Oz deployment of a mobile service robot in a university cafe, we analyze how people signal interaction readiness through nonverbal behaviors and how expert wizards use these cues to guide engagement. Motivated by these observations, we propose a two-stage pipeline in which lightweight perceptual detectors (gaze shifts and proxemics) selectively trigger heavier video-based vision-language model (VLM) queries at socially meaningful moments. We evaluate this pipeline on replayed field interactions and compare two prompting strategies. Our findings suggest that selectively using VLMs as proxies for social reasoning enables socially responsive behavior, allowing robots to act appropriately by attending to the cues people naturally provide in real-world interactions.
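A minimal sketch of how such a two-stage gating pipeline might be structured, assuming hypothetical detector outputs (`gaze_toward_robot`, `person_distance_m`), a placeholder `query_vlm` call, and an illustrative proxemic threshold; none of these names, values, or interfaces come from the paper itself.

```python
# Hypothetical sketch of the two-stage idea: cheap perceptual detectors run on
# every frame, and only when they fire is a short clip escalated to an
# expensive video-based VLM query. All names and thresholds are assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Frame:
    """Minimal stand-in for a camera frame with precomputed perception outputs."""
    image: bytes
    gaze_toward_robot: bool               # lightweight gaze-shift detector (assumed)
    person_distance_m: Optional[float]    # distance to nearest person, metres (assumed)


def lightweight_trigger(frame: Frame, proxemic_threshold_m: float = 1.5) -> bool:
    """Stage 1: flag socially meaningful moments (gaze toward robot or close approach)."""
    gaze_cue = frame.gaze_toward_robot
    proxemic_cue = (
        frame.person_distance_m is not None
        and frame.person_distance_m < proxemic_threshold_m
    )
    return gaze_cue or proxemic_cue


def query_vlm(clip: List[Frame]) -> str:
    """Stage 2 (placeholder): ask a video-based VLM whether the person appears
    ready to interact. A real system would call a hosted or local model here."""
    # e.g. return vlm_client.ask(clip, "Is this person signaling readiness to engage?")
    return "not_ready"


def decide_engagement(frames: List[Frame], clip_len: int = 16) -> bool:
    """Run the cheap detector every frame; escalate a short clip around each
    trigger to the VLM, and engage only if the VLM judges the person ready."""
    for i, frame in enumerate(frames):
        if lightweight_trigger(frame):
            clip = frames[max(0, i - clip_len): i + 1]
            if query_vlm(clip) == "ready":
                return True  # initiate engagement behavior
    return False
```

The design point the abstract emphasizes is the gating itself: the per-frame check is cheap enough to run continuously, while the VLM query, which carries the social-reasoning load, is invoked only at the moments the detectors mark as socially meaningful.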
Similar Papers
Agreeing to Interact in Human-Robot Interaction using Large Language Models and Vision Language Models
Human-Computer Interaction
Helps robots know when to start talking to people.
Utilizing Vision-Language Models as Action Models for Intent Recognition and Assistance
Robotics
Robot understands what you want and helps you.
ExploreVLM: Closed-Loop Robot Exploration Task Planning with Vision-Language Models
Robotics
Robots learn to explore and do tasks better.