VHOI: Controllable Video Generation of Human-Object Interactions from Sparse Trajectories via Motion Densification
By: Wanyue Zhang, Lin Geng Foo, Thabo Beeler, and more
Potential Business Impact:
Generates controllable videos of people interacting with objects from easy-to-specify sparse trajectories.
Synthesizing realistic human-object interactions (HOI) in video is challenging due to the complex, instance-specific interaction dynamics of both humans and objects. Incorporating controllability into video generation adds further complexity. Existing controllable video generation approaches face a trade-off: sparse controls such as keypoint trajectories are easy to specify but lack instance-awareness, while dense signals such as optical flow, depth maps, or 3D meshes are informative but costly to obtain. We propose VHOI, a two-stage framework that first densifies sparse trajectories into HOI mask sequences, and then fine-tunes a video diffusion model conditioned on these dense masks. We introduce a novel HOI-aware motion representation that uses color encodings to distinguish not only human and object motion, but also body-part-specific dynamics. This design incorporates a human prior into the conditioning signal and strengthens the model's ability to understand and generate realistic HOI dynamics. Experiments demonstrate state-of-the-art results in controllable HOI video generation. VHOI is not limited to interaction-only scenarios and can also generate full human navigation leading up to object interactions in an end-to-end manner. Project page: https://vcai.mpi-inf.mpg.de/projects/vhoi/.
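The abstract describes an HOI-aware motion representation that color-codes the object and individual body parts into dense mask frames used as conditioning. The paper does not publish its exact scheme here, so the following is only a minimal sketch of the general idea: the part names, palette colors, and function `encode_hoi_masks` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative palette (assumed, not from the paper): one RGB color for
# the object and one per coarse body part, so the diffusion model's
# conditioning signal distinguishes body-part-specific dynamics.
PALETTE = {
    "object":    (255, 0, 0),
    "torso":     (0, 255, 0),
    "left_arm":  (0, 0, 255),
    "right_arm": (255, 255, 0),
    "legs":      (0, 255, 255),
}

def encode_hoi_masks(part_masks, height, width):
    """Render binary part/object masks into one RGB conditioning frame.

    part_masks: dict mapping a PALETTE label to a boolean (H, W) array.
    Later labels overwrite earlier ones where masks overlap.
    """
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    for label, color in PALETTE.items():
        mask = part_masks.get(label)
        if mask is not None:
            frame[mask] = color
    return frame

# Toy example: a 4x4 frame with an object pixel and a torso pixel.
obj = np.zeros((4, 4), dtype=bool); obj[0, 0] = True
torso = np.zeros((4, 4), dtype=bool); torso[3, 3] = True
frame = encode_hoi_masks({"object": obj, "torso": torso}, 4, 4)
```

Applying this per frame over a densified trajectory sequence would yield the kind of color-coded HOI mask video the abstract describes as the conditioning input to the fine-tuned video diffusion model.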
Similar Papers
Spatial-Temporal Human-Object Interaction Detection
CV and Pattern Recognition
Helps computers understand what people do in videos.
Efficient and Scalable Monocular Human-Object Interaction Motion Reconstruction
CV and Pattern Recognition
Rebuilds human-object motion from a single-camera video.
Open-world Hand-Object Interaction Video Generation Based on Structure and Contact-aware Representation
CV and Pattern Recognition
Makes realistic videos of hands touching objects.