Decoupled Generative Modeling for Human-Object Interaction Synthesis
By: Hwanhee Jung, Seunggwan Lee, Jeongyoon Yoon, et al.
Synthesizing realistic human-object interaction (HOI) is essential for 3D computer vision and robotics, underpinning animation and embodied control. Existing approaches often require manually specified intermediate waypoints and place all optimization objectives on a single network, which increases complexity, reduces flexibility, and leads to errors such as unsynchronized human and object motion or penetration. To address these issues, we propose Decoupled Generative Modeling for Human-Object Interaction Synthesis (DecHOI), which separates path planning and action synthesis. A trajectory generator first produces human and object trajectories without prescribed waypoints, and an action generator conditions on these paths to synthesize detailed motions. To further improve contact realism, we employ adversarial training with a discriminator that focuses on the dynamics of distal joints. The framework also models a moving counterpart and supports responsive, long-sequence planning in dynamic scenes, while preserving plan consistency. Across two benchmarks, FullBodyManipulation and 3D-FUTURE, DecHOI surpasses prior methods on most quantitative metrics and qualitative evaluations, and perceptual studies likewise prefer our results.
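The core idea of the decoupling can be illustrated with a minimal sketch: a first stage plans synchronized human and object trajectories, and a second stage conditions on those paths to produce per-frame motion. The sketch below is purely illustrative, not the authors' implementation; the function names, the straight-line planner, and the pose representation are all assumptions standing in for DecHOI's learned trajectory and action generators.

```python
def generate_trajectory(start, goal, steps=8):
    """Stage 1 (hypothetical stand-in for the learned trajectory generator):
    plan a path from start to goal via linear interpolation, with no
    manually specified intermediate waypoints."""
    return [
        tuple(s + (g - s) * t / (steps - 1) for s, g in zip(start, goal))
        for t in range(steps)
    ]

def generate_actions(human_path, object_path):
    """Stage 2 (hypothetical stand-in for the action generator): synthesize
    per-frame motion conditioned on the planned paths. Here a 'pose' is just
    the paired waypoints; the real model produces full-body motion."""
    # Pairing frame-by-frame keeps human and object motion synchronized,
    # one of the failure modes the decoupled design targets.
    assert len(human_path) == len(object_path), "paths must stay synchronized"
    return list(zip(human_path, object_path))

human_path = generate_trajectory((0.0, 0.0), (2.0, 1.0))
object_path = generate_trajectory((1.0, 0.0), (2.5, 1.0))
motion = generate_actions(human_path, object_path)
```

Because the two stages are separate, the planner can be re-run in a dynamic scene (e.g. when the counterpart moves) while the action stage simply re-conditions on the updated paths.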