MMHOI: Modeling Complex 3D Multi-Human Multi-Object Interactions
By: Kaen Kogashi, Anoop Cherian, Meng-Yu Jennifer Kuo
Potential Business Impact:
Helps computers understand how groups of people use and share objects together.
Real-world scenes often feature multiple humans interacting with multiple objects in ways that are causal, goal-oriented, or cooperative. Yet existing 3D human-object interaction (HOI) benchmarks consider only a fraction of these complex interactions. To close this gap, we present MMHOI -- a large-scale Multi-Human Multi-Object Interaction dataset consisting of images from 12 everyday scenarios. MMHOI offers complete 3D shape and pose annotations for every person and object, along with labels for 78 action categories and 14 interaction-specific body parts, providing a comprehensive testbed for next-generation HOI research. Building on MMHOI, we present MMHOI-Net, an end-to-end transformer-based neural network for jointly estimating human-object 3D geometries, their interactions, and associated actions. A key innovation in our framework is a structured dual-patch representation for modeling objects and their interactions, combined with action recognition to enhance interaction prediction. Experiments on MMHOI and the recently proposed CORE4D dataset demonstrate that our approach achieves state-of-the-art performance in multi-HOI modeling, excelling in both accuracy and reconstruction quality.
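The abstract describes MMHOI-Net only at a high level, so here is a minimal, illustrative PyTorch sketch of how a query-based transformer could jointly predict human and object geometry, body-part contacts, and actions from image features. Everything here is a hypothetical stand-in rather than the authors' implementation: the module names (MultiHOITransformer, dual_patch_head), query counts, and pose output dimensions are assumptions; only the 78 action categories and 14 interaction-specific body parts come from the paper.

```python
# Minimal, illustrative sketch (NOT the authors' code) of a query-based
# transformer for multi-human multi-object interaction prediction.
import torch
import torch.nn as nn

NUM_ACTIONS = 78        # action categories reported for MMHOI
NUM_BODY_PARTS = 14     # interaction-specific body parts reported for MMHOI

class MultiHOITransformer(nn.Module):
    def __init__(self, d_model=256, n_queries=16):
        super().__init__()
        # Learned queries: one set per potential human, one per potential object.
        self.human_queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.object_queries = nn.Parameter(torch.randn(n_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        # Per-query prediction heads (output sizes are placeholder assumptions):
        self.human_pose_head = nn.Linear(d_model, 24 * 6)   # e.g. per-joint 6D rotations
        self.object_pose_head = nn.Linear(d_model, 9)       # e.g. 6D rotation + translation
        # A "dual-patch"-style head in the spirit of the abstract: each object
        # query yields a pair of embeddings, one for the object itself and one
        # for its interaction context.
        self.dual_patch_head = nn.Linear(d_model, 2 * d_model)
        self.contact_head = nn.Linear(2 * d_model, NUM_BODY_PARTS)
        self.action_head = nn.Linear(d_model, NUM_ACTIONS)

    def forward(self, image_tokens):
        # image_tokens: (B, N, d_model) features from an image backbone.
        B = image_tokens.size(0)
        queries = torch.cat([self.human_queries, self.object_queries], dim=0)
        queries = queries.unsqueeze(0).expand(B, -1, -1)
        feats = self.decoder(queries, image_tokens)          # (B, 2*n_queries, d_model)
        n = self.human_queries.size(0)
        human_feats, object_feats = feats[:, :n], feats[:, n:]
        dual = self.dual_patch_head(object_feats)            # object + interaction patches
        return {
            "human_pose": self.human_pose_head(human_feats),
            "object_pose": self.object_pose_head(object_feats),
            "contact_parts": self.contact_head(dual),        # which body parts touch each object
            "actions": self.action_head(human_feats),        # per-human action logits
        }

model = MultiHOITransformer()
tokens = torch.randn(2, 196, 256)   # dummy backbone features for a 2-image batch
out = model(tokens)
print({k: tuple(v.shape) for k, v in out.items()})
```

The dual_patch_head above simply mirrors the abstract's idea of pairing each object with an interaction-specific representation; the paper's actual heads, losses, and human-object matching scheme would differ.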
Similar Papers
Learning to Generate Human-Human-Object Interactions from Textual Descriptions
CV and Pattern Recognition
Turns written descriptions into scenes of people interacting with each other and with objects.
InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation
CV and Pattern Recognition
Teaches computers to create many kinds of motions of people using objects.
Efficient and Scalable Monocular Human-Object Interaction Motion Reconstruction
CV and Pattern Recognition
Rebuilds how people move with objects from a single camera's video.