Seeing is Believing (and Predicting): Context-Aware Multi-Human Behavior Prediction with Vision Language Models
By: Utsav Panchal, Yuchen Liu, Luigi Palmieri, and more
Accurately predicting human behaviors is crucial for mobile robots operating in human-populated environments. While prior research primarily focuses on predicting actions in single-human scenarios from an egocentric view, several robotic applications require understanding multiple human behaviors from a third-person perspective. To this end, we present CAMP-VLM (Context-Aware Multi-human behavior Prediction): a Vision Language Model (VLM)-based framework that combines contextual features from visual input with spatial awareness from scene graphs to improve prediction of human-scene interactions. Due to the lack of suitable datasets for multi-human behavior prediction from an observer view, we fine-tune CAMP-VLM on synthetic human behavior data generated in a photorealistic simulator, and evaluate the resulting models on both synthetic and real-world sequences to assess their generalization capabilities. Leveraging Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), CAMP-VLM outperforms the best-performing baseline by up to 66.9% in prediction accuracy.
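The abstract does not detail how the scene-graph spatial context is combined with the observer-view image, but a minimal sketch of one plausible prompt-construction step is shown below. All names here (the `Edge` structure, `serialize_scene_graph`, `build_prompt`, and the example relations) are illustrative assumptions, not the authors' implementation; the actual CAMP-VLM pipeline, fine-tuned with SFT and DPO, may format its inputs differently.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    """A single scene-graph relation (hypothetical schema for illustration)."""
    subject: str   # e.g. "person_1"
    relation: str  # e.g. "standing_near", "facing"
    obj: str       # e.g. "stove"

def serialize_scene_graph(edges: list[Edge]) -> str:
    """Flatten scene-graph triples into a textual context block for the VLM prompt."""
    return "\n".join(f"- {e.subject} {e.relation} {e.obj}" for e in edges)

def build_prompt(image_path: str, edges: list[Edge], humans: list[str]) -> list[dict]:
    """Compose a chat-style multimodal message: third-person image plus spatial context."""
    question = (
        "Given the third-person view and the spatial relations below, "
        f"predict the next action of each human ({', '.join(humans)}).\n"
        f"Spatial relations:\n{serialize_scene_graph(edges)}"
    )
    return [
        {"role": "user",
         "content": [
             {"type": "image", "path": image_path},
             {"type": "text", "text": question},
         ]},
    ]

if __name__ == "__main__":
    edges = [
        Edge("person_1", "standing_near", "stove"),
        Edge("person_2", "facing", "refrigerator"),
    ]
    messages = build_prompt("frame_0042.png", edges, ["person_1", "person_2"])
    print(messages[0]["content"][1]["text"])  # textual part of the multimodal prompt
```

The resulting message list follows the common chat-style multimodal format accepted by many open VLMs; feeding it to a specific model (and collecting preference pairs for DPO) would require the model's own processor and training setup, which are outside the scope of this sketch.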