Reconstructing In-the-Wild Open-Vocabulary Human-Object Interactions
By: Boran Wen, Dingbang Huang, Zichen Zhang, and more
Potential Business Impact:
Helps computers understand how people physically interact with everyday objects in ordinary photos.
Reconstructing human-object interactions (HOI) from single images is a fundamental problem in computer vision. Existing methods are trained and tested primarily on indoor scenes because 3D data are scarce and, in particular, object variety is limited, which makes it difficult to generalize to real-world scenes with a wide range of objects. Previous 3D HOI datasets were constrained chiefly by the difficulty of acquiring 3D object assets. However, with the progress of single-image 3D reconstruction, it has recently become possible to reconstruct a wide variety of objects from 2D HOI images. We therefore propose a pipeline for annotating fine-grained 3D humans, objects, and their interactions from single images. We annotate more than 2.5k 3D HOI assets from existing 2D HOI datasets and build Open3DHOI, the first open-vocabulary in-the-wild 3D HOI dataset, to serve as a test set for future work. Moreover, we design a novel Gaussian-HOI optimizer that efficiently reconstructs the spatial interactions between humans and objects while learning the contact regions. Beyond 3D HOI reconstruction, we also propose several new tasks for 3D HOI understanding to pave the way for future work. Data and code will be publicly available at https://wenboran2002.github.io/3dhoi.
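The abstract does not spell out how the Gaussian-HOI optimizer works, but the general pattern of contact-aware human-object fitting it belongs to can be illustrated. The sketch below is a hypothetical, simplified example and not the authors' method: it optimizes a rigid object pose in PyTorch so that assumed human contact points touch the object surface while a crude penalty discourages interpenetration. The point clouds, contact indices, loss weights, and helper names are all placeholder assumptions.

```python
# Hypothetical sketch, not the paper's Gaussian-HOI optimizer: fit a rigid object
# pose to a human point cloud with a contact term plus a crude interpenetration
# penalty. All names, weights, and data here are illustrative placeholders.
import torch


def nn_sq_dists(a, b):
    """For each point in a (N, 3), squared distance to its nearest point in b (M, 3)."""
    return (torch.cdist(a, b) ** 2).min(dim=1).values


def axis_angle_to_matrix(rotvec):
    """Rodrigues' formula: 3x3 rotation matrix from an axis-angle vector."""
    theta = rotvec.norm() + 1e-8
    k = rotvec / theta
    zero = torch.zeros((), dtype=rotvec.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3) + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)


def fit_object_pose(human_pts, object_pts, contact_idx, steps=300, lr=1e-2):
    """Optimize an object rotation (axis-angle) and translation so that the
    assumed human contact points touch the object without overlapping it."""
    rotvec = (0.01 * torch.randn(3)).requires_grad_(True)
    trans = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([rotvec, trans], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        obj = object_pts @ axis_angle_to_matrix(rotvec).T + trans

        # Contact: assumed contact points on the human should lie on the object surface.
        contact = nn_sq_dists(human_pts[contact_idx], obj).mean()

        # Interpenetration proxy: object points should keep a small margin from the human.
        margin = 0.01
        penetration = torch.relu(margin**2 - nn_sq_dists(obj, human_pts)).mean()

        loss = contact + 10.0 * penetration
        loss.backward()
        opt.step()
    return rotvec.detach(), trans.detach()


# Toy usage with random stand-in point clouds; the first 20 human points play
# the role of annotated contact vertices (e.g. a hand region).
human = torch.randn(500, 3)
obj = torch.randn(200, 3) + 2.0
rotvec, trans = fit_object_pose(human, obj, torch.arange(20))
```

Per the abstract, the actual optimizer operates on the reconstructed 3D assets and learns the contact regions rather than taking them as given; the sketch only conveys the contact-versus-non-penetration trade-off that such optimizers balance.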
Similar Papers
Zero-Shot Human-Object Interaction Synthesis with Multimodal Priors
Graphics
Creates realistic 3D human-object interactions from text descriptions.
Efficient and Scalable Monocular Human-Object Interaction Motion Reconstruction
CV and Pattern Recognition
Recovers human and object motion from ordinary single-camera videos.
Learning to Generate Human-Human-Object Interactions from Textual Descriptions
CV and Pattern Recognition
Teaches computers to generate two people interacting with an object from text descriptions.