sim2art: Accurate Articulated Object Modeling from a Single Video using Synthetic Training Data Only
By: Arslan Artykov, Corentin Sautier, Vincent Lepetit
Potential Business Impact:
Lets robots understand how things bend and move.
Understanding articulated objects is a fundamental challenge in robotics and digital twin creation. To effectively model such objects, it is essential to recover both part segmentation and the underlying joint parameters. Despite the importance of this task, previous work has largely relied on constrained setups such as multi-view systems, object scanning, or static cameras. In this paper, we present the first data-driven approach that jointly predicts part segmentation and joint parameters from monocular video captured with a freely moving camera. Trained solely on synthetic data, our method demonstrates strong generalization to real-world objects, offering a scalable and practical solution for articulated object understanding. Our approach operates directly on casually recorded video, making it suitable for real-time applications in dynamic environments. Project webpage: https://aartykov.github.io/sim2art/
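To make "joint parameters" concrete, here is a minimal NumPy sketch (not the authors' method or code; all names are hypothetical) of the classical geometric construction behind this task: given the relative rigid motion (R, t) of one segmented part with respect to another between two video frames, recover either a revolute joint (axis direction and a point on the axis) or a prismatic joint (sliding direction).

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class JointParams:
    joint_type: str    # "revolute" or "prismatic"
    axis: np.ndarray   # unit direction of the joint axis, shape (3,)
    origin: np.ndarray # a point the axis passes through, shape (3,); unused for prismatic

def joint_from_relative_motion(R: np.ndarray, t: np.ndarray,
                               rot_eps: float = 1e-3) -> JointParams:
    """Recover joint parameters from the relative rigid motion (R, t)
    of one part with respect to another between two frames."""
    # Rotation angle from the trace of R, clipped for numerical safety.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < rot_eps:
        # Negligible rotation: treat the motion as a prismatic joint
        # sliding along the translation direction.
        direction = t / (np.linalg.norm(t) + 1e-12)
        return JointParams("prismatic", direction, np.zeros(3))
    # Revolute joint: the rotation axis is the eigenvector of R
    # associated with eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(R)
    axis = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    axis /= np.linalg.norm(axis)
    # A point p on the axis satisfies (I - R) p = t, since rotating
    # about an axis through p gives t = p - R p; solve in least squares.
    origin, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return JointParams("revolute", axis, origin)
```

Since (I - R) is rank-deficient along the rotation axis, the least-squares solve returns the minimum-norm point on the axis; any point obtained by sliding along the axis direction is equally valid. A learned system like the one described above would predict such parameters directly from video rather than from given part poses.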
Similar Papers
VideoArtGS: Building Digital Twins of Articulated Objects from Monocular Video
CV and Pattern Recognition
Creates 3D models of moving objects from video.
Generalizable Articulated Object Reconstruction from Casually Captured RGBD Videos
Graphics
Lets robots build and move objects better.
ArtiWorld: LLM-Driven Articulation of 3D Objects in Scenes
CV and Pattern Recognition
Turns static 3D objects into interactive robot parts.