Particulate: Feed-Forward 3D Object Articulation
By: Ruining Li, Yuxin Yao, Chuanxia Zheng, and more
Potential Business Impact:
Gives static 3D objects moving parts, like real articulated toys.
We present Particulate, a feed-forward approach that, given a single static 3D mesh of an everyday object, directly infers all attributes of the underlying articulated structure, including its 3D parts, kinematic structure, and motion constraints. At its core is a transformer network, Part Articulation Transformer, which processes a point cloud of the input mesh using a flexible and scalable architecture to predict all the aforementioned attributes with native multi-joint support. We train the network end-to-end on a diverse collection of articulated 3D assets from public datasets. During inference, Particulate lifts the network's feed-forward prediction to the input mesh, yielding a fully articulated 3D model in seconds, much faster than prior approaches that require per-object optimization. Particulate can also accurately infer the articulated structure of AI-generated 3D assets, enabling full-fledged extraction of articulated 3D objects from a single (real or synthetic) image when combined with an off-the-shelf image-to-3D generator. We further introduce a new challenging benchmark for 3D articulation estimation curated from high-quality public 3D assets, and redesign the evaluation protocol to be more consistent with human preferences. Quantitative and qualitative results show that Particulate significantly outperforms state-of-the-art approaches.
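The abstract describes a single feed-forward pass: sample a point cloud from the input mesh, run it through a transformer, and read off per-point part assignments plus per-part joint attributes (type, axis, motion limits). The sketch below illustrates that interface in miniature; every name, dimension, and prediction head here is an illustrative assumption, not the paper's actual Part Articulation Transformer architecture.

```python
import numpy as np

# Hypothetical sketch of a Particulate-style feed-forward pass.
# All names, dimensions, and heads are assumptions for illustration;
# the real Part Articulation Transformer is trained end-to-end.

JOINT_TYPES = ["fixed", "revolute", "prismatic"]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class PartArticulationTransformerSketch:
    """Toy stand-in: one self-attention layer over the point cloud,
    then per-point part logits and per-part joint-attribute heads."""
    def __init__(self, num_parts=4, d_model=16, seed=0):
        rng = np.random.default_rng(seed)
        self.num_parts = num_parts
        self.W_in = rng.normal(0, 0.1, (3, d_model))
        self.W_qkv = rng.normal(0, 0.1, (d_model, 3 * d_model))
        self.W_seg = rng.normal(0, 0.1, (d_model, num_parts))
        # per-part head: joint-type logits (3) + axis (3) + origin (3) + limits (2)
        self.W_joint = rng.normal(0, 0.1, (d_model, len(JOINT_TYPES) + 8))

    def __call__(self, points):
        # points: (N, 3) sampled from the input mesh surface
        x = points @ self.W_in                             # embed coordinates
        q, k, v = np.split(x @ self.W_qkv, 3, axis=-1)
        attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
        x = x + attn @ v                                   # one attention block
        seg = softmax(x @ self.W_seg, axis=-1)             # (N, P) soft part assignment
        # pool features per part, then decode that part's joint attributes
        part_feat = (seg.T @ x) / (seg.sum(0)[:, None] + 1e-8)
        joint_raw = part_feat @ self.W_joint
        joints = []
        for p in range(self.num_parts):
            axis = joint_raw[p, 3:6]
            axis = axis / (np.linalg.norm(axis) + 1e-8)    # unit motion axis
            joints.append({
                "type": JOINT_TYPES[int(np.argmax(joint_raw[p, :3]))],
                "axis": axis,
                "origin": joint_raw[p, 6:9],               # a point on the axis
                "limits": np.sort(joint_raw[p, 9:11]),     # motion constraint range
            })
        return {"part_labels": seg.argmax(-1), "joints": joints}

# Usage: one feed-forward call on a random point cloud, no per-object optimization.
points = np.random.default_rng(1).normal(size=(256, 3))
pred = PartArticulationTransformerSketch()(points)
```

Note the design point the abstract emphasizes: because every joint is decoded from pooled per-part features in one pass, multi-joint objects are handled natively and inference is a single forward call rather than an optimization loop.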
Similar Papers
ArtiLatent: Realistic Articulated 3D Object Generation via Structured Latents
CV and Pattern Recognition
Creates realistic 3D objects that can move.
GaussianArt: Unified Modeling of Geometry and Motion for Articulated Objects
CV and Pattern Recognition
Builds realistic 3D models of moving objects.
Articulate3D: Zero-Shot Text-Driven 3D Object Posing
CV and Pattern Recognition
Moves 3D objects with just words.