REACT3D: Recovering Articulations for Interactive Physical 3D Scenes
By: Zhao Huang, Boyang Sun, Alexandros Delitzas, and more
Potential Business Impact:
Makes static 3D scenes move and interact.
Interactive 3D scenes are increasingly vital for embodied intelligence, yet existing datasets remain limited due to the labor-intensive process of annotating part segmentation, kinematic types, and motion trajectories. We present REACT3D, a scalable zero-shot framework that converts static 3D scenes into simulation-ready interactive replicas with consistent geometry, enabling direct use in diverse downstream tasks. Our contributions include: (i) openable-object detection and segmentation to extract candidate movable parts from static scenes, (ii) articulation estimation that infers joint types and motion parameters, (iii) hidden-geometry completion followed by interactive object assembly, and (iv) interactive scene integration in widely supported formats to ensure compatibility with standard simulation platforms. We achieve state-of-the-art performance on detection/segmentation and articulation metrics across diverse indoor scenes, demonstrating the effectiveness of our framework and providing a practical foundation for scalable interactive scene generation, thereby lowering the barrier to large-scale research on articulated scene understanding. Our project page is https://react3d.github.io/
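To make the pipeline's output concrete: a "simulation-ready interactive replica" ultimately boils down to movable parts annotated with a joint type and motion parameters, exported in a widely supported format such as URDF. The sketch below is a hypothetical illustration (not the authors' code) of how one detected part, e.g. a cabinet door, could be serialized as a minimal URDF joint; all names and values are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class ArticulatedPart:
    """Hypothetical container for one movable part recovered from a static scene."""
    name: str
    joint_type: str              # "revolute" (hinge) or "prismatic" (slider/drawer)
    axis: tuple                  # motion axis in the parent frame
    origin: tuple                # joint position in the parent frame (meters)
    limit: tuple                 # (lower, upper) in radians or meters

    def to_urdf_joint(self, parent: str) -> str:
        """Emit a minimal URDF <joint> element consumable by standard simulators."""
        return (
            f'<joint name="{self.name}" type="{self.joint_type}">\n'
            f'  <parent link="{parent}"/>\n'
            f'  <child link="{self.name}_link"/>\n'
            f'  <origin xyz="{" ".join(map(str, self.origin))}"/>\n'
            f'  <axis xyz="{" ".join(map(str, self.axis))}"/>\n'
            f'  <limit lower="{self.limit[0]}" upper="{self.limit[1]}"/>\n'
            f'</joint>'
        )

# Example: a cabinet door hinged about the vertical (z) axis, opening up to ~90 degrees.
door = ArticulatedPart("cabinet_door", "revolute",
                       axis=(0, 0, 1), origin=(0.4, 0.0, 0.7),
                       limit=(0.0, 1.57))
print(door.to_urdf_joint(parent="cabinet_base"))
```

Detection/segmentation would populate `name` and the part geometry, articulation estimation would fill in `joint_type`, `axis`, `origin`, and `limit`, and the scene-integration stage would assemble such elements into a full simulator-loadable description.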
Similar Papers
Articulate3D: Zero-Shot Text-Driven 3D Object Posing
CV and Pattern Recognition
Moves 3D objects with just words.
Particulate: Feed-Forward 3D Object Articulation
CV and Pattern Recognition
Makes 3D objects move like real toys.