Mesh4D: 4D Mesh Reconstruction and Tracking from Monocular Video
By: Zeren Jiang, Chuanxia Zheng, Iro Laina, and others
Potential Business Impact:
Reconstructs complete 3D models of moving objects, including their motion, from a single video.
We propose Mesh4D, a feed-forward model for monocular 4D mesh reconstruction. Given a monocular video of a dynamic object, our model reconstructs the object's complete 3D shape and motion, represented as a deformation field. Our key contribution is a compact latent space that encodes the entire animation sequence in a single pass. This latent space is learned by an autoencoder that, during training, is guided by the skeletal structure of the training objects, providing strong priors on plausible deformations. Crucially, skeletal information is not required at inference time. The encoder employs spatio-temporal attention, yielding a more stable representation of the object's overall deformation. Building on this representation, we train a latent diffusion model that, conditioned on the input video and the mesh reconstructed from the first frame, predicts the full animation in one shot. We evaluate Mesh4D on reconstruction and novel view synthesis benchmarks, outperforming prior methods in recovering accurate 3D shape and deformation.
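The abstract describes the model's output as a deformation field over a mesh reconstructed from the first frame. As a minimal illustration of that representation (not the paper's actual code; the function name, shapes, and toy data below are assumptions), a per-frame deformation field can be applied to rest-pose vertices by broadcasting:

```python
import numpy as np

def apply_deformation(base_vertices, deformation_field):
    """Apply a per-frame deformation field to a rest-pose mesh.

    base_vertices: (V, 3) rest-pose mesh vertex positions.
    deformation_field: (T, V, 3) per-frame vertex offsets.
    Returns: (T, V, 3) animated vertex positions.
    """
    # Broadcast the (V, 3) base mesh against (T, V, 3) offsets.
    return base_vertices[None, :, :] + deformation_field

# Toy example (hypothetical data): a unit triangle rising along z over 4 frames.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
T = 4
offsets = np.zeros((T, 3, 3))
offsets[:, :, 2] = np.linspace(0.0, 1.0, T)[:, None]

anim = apply_deformation(V, offsets)
print(anim.shape)  # (4, 3, 3)
```

This only shows the data layout; Mesh4D itself predicts the deformation field with a latent diffusion model conditioned on the video and the first-frame mesh, rather than computing it directly.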
Similar Papers
V2M4: 4D Mesh Animation Reconstruction from a Single Monocular Video
Graphics
Turns one video into a moving 3D model.
Any4D: Unified Feed-Forward Metric 4D Reconstruction
CV and Pattern Recognition
Recovers accurate moving 3D objects from video.
Drive Any Mesh: 4D Latent Diffusion for Mesh Deformation from Video
CV and Pattern Recognition
Animates 3D models to move like the objects in a video.