Mesh4D: 4D Mesh Reconstruction and Tracking from Monocular Video

Published: January 8, 2026 | arXiv ID: 2601.05251v1

By: Zeren Jiang, Chuanxia Zheng, Iro Laina, and more

Potential Business Impact:

Reconstructs animated 3D models of moving objects from a single ordinary video.

Business Areas:
Motion Capture, Media and Entertainment, Video

We propose Mesh4D, a feed-forward model for monocular 4D mesh reconstruction. Given a monocular video of a dynamic object, our model reconstructs the object's complete 3D shape and motion, represented as a deformation field. Our key contribution is a compact latent space that encodes the entire animation sequence in a single pass. This latent space is learned by an autoencoder that, during training, is guided by the skeletal structure of the training objects, providing strong priors on plausible deformations. Crucially, skeletal information is not required at inference time. The encoder employs spatio-temporal attention, yielding a more stable representation of the object's overall deformation. Building on this representation, we train a latent diffusion model that, conditioned on the input video and the mesh reconstructed from the first frame, predicts the full animation in one shot. We evaluate Mesh4D on reconstruction and novel view synthesis benchmarks, outperforming prior methods in recovering accurate 3D shape and deformation.
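The spatio-temporal attention the abstract credits for a stable deformation representation can be sketched minimally: tokens from all frames and all surface points attend to one another jointly, giving every token a clip-level view of the motion. The shapes, the single head, and the identity Q/K/V projections below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_attention(tokens):
    """Single-head self-attention over frames and points jointly.

    tokens: (T, N, D) — T frames, N surface tokens, D channels.
    Flattening time and space lets each token attend across the whole
    clip rather than frame-by-frame (hypothetical sketch; real models
    use learned projection weights, multiple heads, and normalization).
    """
    T, N, D = tokens.shape
    x = tokens.reshape(T * N, D)
    # Identity projections stand in for learned Q/K/V weights.
    q, k, v = x, x, x
    attn = softmax(q @ k.T / np.sqrt(D), axis=-1)
    return (attn @ v).reshape(T, N, D)

rng = np.random.default_rng(0)
out = spatio_temporal_attention(rng.normal(size=(4, 8, 16)))
print(out.shape)  # (4, 8, 16)
```

Because attention spans the full sequence, each frame's representation is smoothed by information from every other frame, which is the intuition behind the claimed stability of the encoded deformation.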

Country of Origin
🇬🇧 United Kingdom

Page Count
15 pages

Category
Computer Science:
CV and Pattern Recognition