4D Primitive-Mâché: Glueing Primitives for Persistent 4D Scene Reconstruction
By: Kirill Mazur, Marwan Taher, Andrew J. Davison
We present a dynamic reconstruction system that takes a casual monocular RGB video as input and outputs a complete and persistent reconstruction of the scene. In other words, we reconstruct not only the currently visible parts of the scene but also all previously viewed parts, which enables replaying the complete reconstruction across all timesteps. Our method decomposes the scene into a set of rigid 3D primitives, which are assumed to move rigidly throughout the scene. Using estimated dense 2D correspondences, we jointly infer the rigid motion of these primitives through an optimisation pipeline, yielding a 4D reconstruction of the scene, i.e. 3D geometry moving dynamically through time. To achieve this, we also introduce a mechanism that extrapolates the motion of objects that become invisible, employing motion-grouping techniques to maintain continuity. The resulting system enables 4D spatio-temporal awareness, offering capabilities such as replayable 3D reconstructions of articulated objects through time, multi-object scanning, and object permanence. On object scanning and multi-object datasets, our system significantly outperforms existing methods both quantitatively and qualitatively.
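To make the core idea concrete, below is a minimal PyTorch sketch of the kind of per-primitive rigid-motion fit the abstract describes: estimating an SE(3) transform for one primitive by minimising the reprojection error of its 3D points against dense 2D correspondences. This is not the authors' implementation; all names (`fit_rigid_motion`, `points_3d`, `tracks_2d`, `intrinsics`) are hypothetical, and the camera is assumed to be a simple pinhole model.

```python
# Hypothetical sketch of fitting one primitive's rigid motion to dense
# 2D correspondences -- not the paper's actual code.
import torch

def skew(k: torch.Tensor) -> torch.Tensor:
    """Skew-symmetric matrix of a 3-vector, built differentiably."""
    z = k.new_zeros(())
    return torch.stack([
        torch.stack([z, -k[2], k[1]]),
        torch.stack([k[2], z, -k[0]]),
        torch.stack([-k[1], k[0], z]),
    ])

def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = w.norm().clamp(min=1e-8)
    K = skew(w / theta)
    I = torch.eye(3, dtype=w.dtype)
    return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def fit_rigid_motion(points_3d, tracks_2d, intrinsics, iters=300, lr=1e-2):
    """Fit one primitive's rigid SE(3) motion for a single target frame.

    points_3d:  (N, 3) primitive points in the reference frame.
    tracks_2d:  (N, 2) their corresponding pixels in the target frame,
                e.g. from a dense 2D point tracker.
    intrinsics: (3, 3) pinhole camera matrix.
    """
    w = torch.zeros(3, requires_grad=True)  # rotation, axis-angle
    t = torch.zeros(3, requires_grad=True)  # translation
    opt = torch.optim.Adam([w, t], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        R = axis_angle_to_matrix(w)
        moved = points_3d @ R.T + t             # rigidly move the primitive
        proj = moved @ intrinsics.T             # pinhole projection
        uv = proj[:, :2] / proj[:, 2:3]         # perspective divide -> pixels
        loss = ((uv - tracks_2d) ** 2).mean()   # dense reprojection error
        loss.backward()
        opt.step()
    return axis_angle_to_matrix(w).detach(), t.detach()
```

In the full system this fit would presumably run jointly over all primitives and frames, with motion grouping carrying transforms forward for primitives whose correspondences vanish (the object-permanence mechanism); the sketch covers only the single-primitive, single-frame case.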