HanDyVQA: A Video QA Benchmark for Fine-Grained Hand-Object Interaction Dynamics
By: Masatoshi Tateno, Gido Kato, Hirokatsu Kataoka, and more
Potential Business Impact:
Helps robots understand how hands move objects.
Hand-object interaction (HOI) inherently involves dynamics in which human manipulations produce distinct spatio-temporal effects on objects. However, existing semantic HOI benchmarks have focused either on the manipulation or on its resulting effects at a coarse level, lacking the fine-grained spatio-temporal reasoning needed to capture the underlying dynamics of HOI. We introduce HanDyVQA, a fine-grained video question-answering benchmark that comprehensively covers both the manipulation and effect aspects of HOI. HanDyVQA comprises six complementary question types (Action, Process, Objects, Location, State Change, and Object Parts), totalling 11.1K multiple-choice QA pairs. Answering the collected QA pairs requires recognizing manipulation styles, hand/object motions, and part-level state changes. HanDyVQA also includes 10.3K segmentation masks for the Objects and Object Parts questions, enabling the evaluation of object- and part-level reasoning in video object segmentation. We evaluated recent video foundation models on our benchmark and found that even the best-performing model, Gemini-2.5-Pro, reached only 73% average accuracy, far below human performance (97%). Further analysis reveals remaining challenges in understanding spatial relationships, motion, and part-level geometry. We also found that integrating explicit HOI-related cues into visual features improves performance, offering insights for developing future models with a deeper understanding of HOI dynamics.
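To make the benchmark's multiple-choice setup concrete, here is a minimal sketch of how one HanDyVQA-style QA pair might be represented and how average accuracy could be computed. The field names (question_type, choices, answer_idx), the sample contents, and the accuracy helper are illustrative assumptions, not the benchmark's actual data schema or official evaluation code.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    # Hypothetical schema for one multiple-choice QA pair; field names
    # are assumptions for illustration, not HanDyVQA's actual format.
    video_id: str
    question_type: str   # one of: Action, Process, Objects, Location, State Change, Object Parts
    question: str
    choices: list[str]   # candidate answers shown to the model
    answer_idx: int      # index of the correct choice

def accuracy(samples: list[QAPair], predictions: list[int]) -> float:
    """Fraction of samples whose predicted choice index matches the answer."""
    if not samples:
        return 0.0
    correct = sum(pred == s.answer_idx for s, pred in zip(samples, predictions))
    return correct / len(samples)

# Toy usage: two made-up samples, one correct prediction -> 50% accuracy.
samples = [
    QAPair("vid_001", "State Change", "What happens to the jar lid?",
           ["It is twisted off", "It is pressed down", "It is slid aside"], 0),
    QAPair("vid_002", "Location", "Where does the right hand move the cup?",
           ["Toward the sink", "Onto the shelf", "Off the table edge"], 1),
]
print(f"Average accuracy: {accuracy(samples, [0, 2]):.0%}")
```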
Similar Papers
Open-world Hand-Object Interaction Video Generation Based on Structure and Contact-aware Representation
CV and Pattern Recognition
Makes videos of hands touching objects realistic.
Rethinking Human-Object Interaction Evaluation for both Vision-Language Models and HOI-Specific Methods
CV and Pattern Recognition
Helps computers understand what people are doing in pictures.
Egocentric Human-Object Interaction Detection: A New Benchmark and Method
CV and Pattern Recognition
Helps robots see what hands are doing.