Seeing is Believing (and Predicting): Context-Aware Multi-Human Behavior Prediction with Vision Language Models
By: Utsav Panchal, Yuchen Liu, Luigi Palmieri, and more
Potential Business Impact:
Helps robots predict what multiple people around them will do.
Accurately predicting human behaviors is crucial for mobile robots operating in human-populated environments. While prior research primarily focuses on predicting actions in single-human scenarios from an egocentric view, several robotic applications require understanding multiple human behaviors from a third-person perspective. To this end, we present CAMP-VLM (Context-Aware Multi-human behavior Prediction): a Vision Language Model (VLM)-based framework that incorporates contextual features from visual input and spatial awareness from scene graphs to enhance the prediction of human-scene interactions. Due to the lack of suitable datasets for multi-human behavior prediction from an observer view, we fine-tune CAMP-VLM with synthetic human behavior data generated by a photorealistic simulator, and evaluate the resulting models on both synthetic and real-world sequences to assess their generalization capabilities. Leveraging Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), CAMP-VLM outperforms the best-performing baseline by up to 66.9% in prediction accuracy.
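To make the idea of combining visual context with scene-graph spatial awareness more concrete, here is a minimal sketch of how a scene graph might be serialized into the text portion of a VLM prompt for multi-human behavior prediction. This is not the authors' implementation; the abstract does not specify the input format, so the class names, relations, and prompt wording below (SceneNode, SceneEdge, serialize_scene_graph, build_prompt) are hypothetical illustrations.

```python
# Hypothetical sketch: turning a scene graph plus an observer-view image
# into a single VLM query for multi-human behavior prediction.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SceneNode:
    node_id: str              # e.g. "human_1", "sofa", "door"
    category: str             # semantic label of the person or object
    position: Tuple[float, float]  # (x, y) in the scene frame, metres


@dataclass
class SceneEdge:
    source: str               # node_id of the subject (typically a human)
    relation: str             # spatial relation, e.g. "near", "facing"
    target: str               # node_id of the related object


def serialize_scene_graph(nodes: List[SceneNode], edges: List[SceneEdge]) -> str:
    """Flatten the scene graph into plain text the VLM can condition on."""
    node_lines = [f"- {n.node_id} ({n.category}) at {n.position}" for n in nodes]
    edge_lines = [f"- {e.source} {e.relation} {e.target}" for e in edges]
    return ("Objects and people:\n" + "\n".join(node_lines) +
            "\nSpatial relations:\n" + "\n".join(edge_lines))


def build_prompt(scene_graph_text: str, humans: List[str]) -> str:
    """Compose the text part of the query; the observer-view camera frame
    would be passed to the VLM alongside this prompt as the visual input."""
    return (
        "You observe a scene from a third-person (observer) view.\n"
        f"{scene_graph_text}\n"
        f"For each of {', '.join(humans)}, predict their next behavior "
        "and the object they will interact with."
    )


if __name__ == "__main__":
    nodes = [
        SceneNode("human_1", "person", (1.2, 0.5)),
        SceneNode("human_2", "person", (3.0, 2.1)),
        SceneNode("sofa", "furniture", (1.5, 0.0)),
        SceneNode("door", "structure", (3.2, 2.5)),
    ]
    edges = [
        SceneEdge("human_1", "near", "sofa"),
        SceneEdge("human_2", "facing", "door"),
    ]
    prompt = build_prompt(serialize_scene_graph(nodes, edges),
                          ["human_1", "human_2"])
    print(prompt)  # this text plus the image would be sent to the fine-tuned VLM
```

In a pipeline like the one the abstract describes, prompts of this kind paired with ground-truth behavior labels from the photorealistic simulator would serve as SFT training examples, while preferred versus dispreferred model completions would form the preference pairs used for DPO.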
Similar Papers
VisualActBench: Can VLMs See and Act like a Human?
CV and Pattern Recognition
Teaches computers to act smartly by just watching.
LVLM-Aided Alignment of Task-Specific Vision Models
CV and Pattern Recognition
Makes AI models understand things like people do.
Can Vision-Language Models Understand Construction Workers? An Exploratory Study
CV and Pattern Recognition
Computers see if workers are happy or busy.