GenHOI: Generalizing Text-driven 4D Human-Object Interaction Synthesis for Unseen Objects
By: Shujia Li, Haiyu Zhang, Xinyuan Chen, and more
Potential Business Impact:
Generates realistic 4D human-object interaction animations from text, even for objects never seen during training.
While diffusion models and large-scale motion datasets have advanced text-driven human motion synthesis, extending these advances to 4D human-object interaction (HOI) remains challenging, mainly due to the limited availability of large-scale 4D HOI datasets. In our study, we introduce GenHOI, a novel two-stage framework aimed at achieving two key objectives: 1) generalization to unseen objects and 2) the synthesis of high-fidelity 4D HOI sequences. In the first stage, we employ an Object-AnchorNet to reconstruct sparse 3D HOI keyframes for unseen objects, learning solely from 3D HOI datasets and thereby mitigating the dependence on large-scale 4D HOI data. In the second stage, we introduce a Contact-Aware Diffusion Model (ContactDM) to seamlessly interpolate the sparse 3D HOI keyframes into dense, temporally coherent 4D HOI sequences. To enhance the quality of the generated sequences, we propose a novel Contact-Aware Encoder within ContactDM to extract human-object contact patterns, and a novel Contact-Aware HOI Attention to effectively integrate these contact signals into the diffusion model. Experimental results show that our method achieves state-of-the-art performance on the publicly available OMOMO and 3D-FUTURE datasets, demonstrating strong generalization to unseen objects while enabling high-fidelity 4D HOI generation.
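To make the Contact-Aware HOI Attention idea concrete, below is a minimal, hypothetical PyTorch sketch of one way contact features could bias cross-attention between per-frame human motion tokens and object geometry tokens, so that frames in contact attend more strongly to the object. The module name, tensor shapes, and the per-head gating scheme are illustrative assumptions, not the authors' released architecture.

```python
# Illustrative sketch only: the gating design and all shapes are assumptions,
# not the GenHOI authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContactAwareHOIAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)        # queries from human motion tokens
        self.kv_proj = nn.Linear(dim, 2 * dim)   # keys/values from object tokens
        self.contact_gate = nn.Linear(dim, num_heads)  # per-head logit bias from contact features
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, human_tokens, object_tokens, contact_feats):
        # human_tokens:  (B, T, dim) per-frame human motion features
        # object_tokens: (B, N, dim) object geometry features
        # contact_feats: (B, T, dim) contact patterns from a contact-aware encoder
        B, T, _ = human_tokens.shape
        N = object_tokens.shape[1]
        q = self.q_proj(human_tokens).view(B, T, self.num_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(object_tokens).chunk(2, dim=-1)
        k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        # Attention logits between human frames and object tokens: (B, H, T, N).
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        # Contact features add a per-frame, per-head bias, so in-contact frames
        # weight object tokens more heavily.
        gate = self.contact_gate(contact_feats).transpose(1, 2)  # (B, H, T)
        attn = attn + gate.unsqueeze(-1)
        out = (F.softmax(attn, dim=-1) @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(out)
```

Injecting the contact signal as an additive logit bias (rather than concatenating it to the tokens) is one simple way to let contact information steer attention without changing the token dimensionality; the paper may realize this integration differently.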
Similar Papers
Zero-Shot Human-Object Interaction Synthesis with Multimodal Priors
Graphics
Creates realistic 3D actions from text descriptions.
AnchorHOI: Zero-shot Generation of 4D Human-Object Interaction via Anchor-based Prior Distillation
CV and Pattern Recognition
Generates realistic 4D human-object interactions without training examples.
Efficient and Scalable Monocular Human-Object Interaction Motion Reconstruction
CV and Pattern Recognition
Reconstructs human-object motion from single-camera video.