InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos
By: Yangsong Zhang, Abdul Ahad Butt, Gül Varol, and more
Potential Business Impact:
Makes computer characters interact with objects realistically.
Human motion generation has seen great advances thanks to recent diffusion models trained on large-scale motion capture data. Most existing works, however, target the animation of isolated people in empty scenes. Meanwhile, synthesizing realistic human-object interactions in complex 3D scenes remains a critical challenge in computer graphics and robotics. One obstacle to generating versatile, high-fidelity human-object interactions is the lack of large-scale datasets with diverse object manipulations. Indeed, existing motion capture data is typically restricted to single people and manipulations of limited sets of objects. To address this issue, we propose an automatic motion extraction pipeline and use it to collect interaction-rich human motions. Our new dataset, InterPose, contains 73.8K sequences of 3D human motion and corresponding text captions automatically obtained from 45.8K videos with human-object interactions. We perform extensive experiments and demonstrate that InterPose brings significant improvements to state-of-the-art methods for human motion generation. Moreover, using InterPose we develop an LLM-based agent that enables zero-shot animation of people interacting with diverse objects and scenes.
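The abstract does not specify how the dataset is stored. As a minimal sketch, assuming each InterPose entry pairs a 3D pose sequence extracted from a web video with its automatically generated caption, one entry and a simple loader might look as follows (all field names, shapes, and the archive layout are hypothetical):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class InterPoseEntry:
    """Hypothetical record pairing a 3D human motion with its caption.

    The paper only states that InterPose contains 73.8K motion sequences
    with automatically obtained text captions from 45.8K videos; the
    fields below are assumptions for illustration.
    """
    video_id: str      # source web video the motion was extracted from
    poses: np.ndarray  # (num_frames, num_joints, 3) 3D joint positions
    caption: str       # automatically generated text description

def load_entries(path: str) -> list[InterPoseEntry]:
    """Illustrative loader for an assumed .npz layout with parallel arrays."""
    archive = np.load(path, allow_pickle=True)
    return [
        InterPoseEntry(video_id=str(v), poses=np.asarray(p), caption=str(c))
        for v, p, c in zip(archive["video_ids"], archive["poses"], archive["captions"])
    ]
```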
Similar Papers
InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation
CV and Pattern Recognition
Makes robots better at picking up and using things.
Synthetic Human Action Video Data Generation with Pose Transfer
CV and Pattern Recognition
Makes fake videos of people moving realistically.
Efficient and Scalable Monocular Human-Object Interaction Motion Reconstruction
CV and Pattern Recognition
Robots learn to copy human actions from videos.