Articulated Object Estimation in the Wild
By: Abdelrhman Werby, Martin Büchner, Adrian Röfer, and more
Potential Business Impact:
Robots learn how objects move by watching people manipulate them.
Understanding the 3D motion of articulated objects is essential in robotic scene understanding, mobile manipulation, and motion planning. Prior methods for articulation estimation have primarily focused on controlled settings, assuming either fixed camera viewpoints or direct observations of various object states, which tend to fail in more realistic unconstrained environments. In contrast, humans effortlessly infer articulation by watching others manipulate objects. Inspired by this, we introduce ArtiPoint, a novel estimation framework that can infer articulated object models under dynamic camera motion and partial observability. By combining deep point tracking with a factor graph optimization framework, ArtiPoint robustly estimates articulated part trajectories and articulation axes directly from raw RGB-D videos. To foster future research in this domain, we introduce Arti4D, the first ego-centric in-the-wild dataset that captures articulated object interactions at a scene level, accompanied by articulation labels and ground-truth camera poses. We benchmark ArtiPoint against a range of classical and learning-based baselines, demonstrating its superior performance on Arti4D. We make code and Arti4D publicly available at https://artipoint.cs.uni-freiburg.de.
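The abstract describes pairing deep point tracking with factor-graph optimization to recover articulation axes from RGB-D video. As a rough, self-contained illustration only (not the authors' implementation, ignoring camera motion and the factor-graph machinery, with all function names and toy data invented here), the sketch below fits a revolute articulation axis to tracked 3D points of a moving part using plain least squares.

```python
# Illustrative sketch (assumed, not ArtiPoint's code): estimate a revolute
# articulation axis from tracked 3D points of a moving part, assuming the
# trajectories are already expressed in a static world frame.
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def revolute_axis(tracks):
    """tracks: (T, N, 3) array of N tracked points over T frames.
    Returns a unit axis direction and a point lying on the axis."""
    R, t = rigid_transform(tracks[0], tracks[-1])   # part motion, first -> last frame
    # Axis direction: eigenvector of R with eigenvalue 1 (the rotation axis).
    w, V = np.linalg.eig(R)
    axis = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    axis /= np.linalg.norm(axis)
    # Points on the axis are fixed by the motion: R p + t = p  =>  (I - R) p = t.
    p, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return axis, p

# Toy usage: points rotating about a z-parallel axis through (1, 0, 0).
angles = np.linspace(0.0, np.pi / 3, 10)
base = np.random.default_rng(0).normal(size=(20, 3))
tracks = np.stack([
    (base - [1, 0, 0]) @ np.array([[np.cos(a), -np.sin(a), 0],
                                   [np.sin(a),  np.cos(a), 0],
                                   [0,          0,         1]]).T + [1, 0, 0]
    for a in angles
])
print(revolute_axis(tracks))  # axis ~ (0, 0, +/-1), point ~ (1, 0, 0)
```

The real system additionally has to separate camera motion from object motion and cope with noisy, partial tracks, which is where the factor-graph formulation over part trajectories and axis parameters comes in.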
Similar Papers
Generalizable Articulated Object Reconstruction from Casually Captured RGBD Videos
Graphics
Lets robots rebuild models of movable objects from casual videos.
ArtiWorld: LLM-Driven Articulation of 3D Objects in Scenes
CV and Pattern Recognition
Turns static 3D objects into interactive, articulated models.
sim2art: Accurate Articulated Object Modeling from a Single Video using Synthetic Training Data Only
CV and Pattern Recognition
Lets robots understand how things bend and move.