Drive Any Mesh: 4D Latent Diffusion for Mesh Deformation from Video

Published: June 9, 2025 | arXiv ID: 2506.07489v1

By: Yahao Shi, Yang Liu, Yanmin Wu, and more

Potential Business Impact:

Animates existing 3D models directly from a single video, avoiding manual rigging.

Business Areas:
Autonomous Vehicles, Transportation

We propose DriveAnyMesh, a method for driving meshes guided by monocular video. Current 4D generation techniques encounter challenges with modern rendering engines: implicit methods render slowly and are unfriendly to rasterization-based engines, while skeletal methods demand significant manual effort and lack cross-category generalization. Animating existing 3D assets, rather than creating 4D assets from scratch, demands a deep understanding of the input's 3D structure. To tackle these challenges, we present a 4D diffusion model that denoises sequences of latent sets, which are then decoded into mesh animations from point cloud trajectory sequences. These latent sets are produced by a transformer-based variational autoencoder that simultaneously captures 3D shape and motion information. A spatiotemporal, transformer-based diffusion model then exchanges information across the latent frames, improving the efficiency and generalization of the generated results. Our experiments demonstrate that DriveAnyMesh rapidly produces high-quality animations for complex motions and is compatible with modern rendering engines. The method holds potential for applications in the gaming and film industries.
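The abstract describes a pipeline with two learned stages: a transformer-based VAE that compresses each frame's point cloud into a fixed-size latent set, and a spatiotemporal transformer that denoises the whole sequence of latent sets jointly before decoding back to meshes. Below is a minimal PyTorch sketch of those two stages, assuming illustrative module names, dimensions, and a plain DDPM sampling loop; it is not the authors' implementation, and the mesh decoder and video conditioning are omitted.

```python
import torch
import torch.nn as nn


class LatentSetVAE(nn.Module):
    """Encodes one frame's point cloud (B, N, 3) into a latent set (B, S, D)."""

    def __init__(self, num_latents=64, dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_latents, dim))
        self.point_proj = nn.Linear(3, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_mean = nn.Linear(dim, dim)
        self.to_logvar = nn.Linear(dim, dim)

    def encode(self, points):
        kv = self.point_proj(points)                       # (B, N, D)
        q = self.queries.expand(points.size(0), -1, -1)    # (B, S, D)
        z, _ = self.cross_attn(q, kv, kv)                  # latent set (B, S, D)
        mean, logvar = self.to_mean(z), self.to_logvar(z)
        return mean + torch.randn_like(mean) * (0.5 * logvar).exp()


class SpatioTemporalDenoiser(nn.Module):
    """Alternates attention within each frame (space) and across frames (time)."""

    def __init__(self, dim=256, layers=4):
        super().__init__()
        self.time_embed = nn.Linear(1, dim)
        spatial_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.spatial = nn.TransformerEncoder(spatial_layer, num_layers=layers)
        self.temporal = nn.TransformerEncoder(temporal_layer, num_layers=layers)

    def forward(self, z, t):
        # z: (B, F, S, D) noisy latent-set sequence, t: (B,) diffusion timestep
        B, F, S, D = z.shape
        z = z + self.time_embed(t.float().view(B, 1)).view(B, 1, 1, D)
        z = self.spatial(z.reshape(B * F, S, D)).reshape(B, F, S, D)
        z = z.permute(0, 2, 1, 3).reshape(B * S, F, D)     # tokens = frames
        z = self.temporal(z).reshape(B, S, F, D).permute(0, 2, 1, 3)
        return z                                           # predicted noise


@torch.no_grad()
def sample(denoiser, steps=50, frames=16, num_latents=64, dim=256):
    """Plain DDPM ancestral sampling over the whole latent-set sequence."""
    z = torch.randn(1, frames, num_latents, dim)
    betas = torch.linspace(1e-4, 0.02, steps)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    for i in reversed(range(steps)):
        eps = denoiser(z, torch.full((1,), i))
        mean = (z - betas[i] / (1.0 - alpha_bars[i]).sqrt() * eps) / (1.0 - betas[i]).sqrt()
        z = mean + betas[i].sqrt() * torch.randn_like(z) if i > 0 else mean
    return z  # each frame's latent set would then be decoded to a mesh
```

Calling `sample(SpatioTemporalDenoiser())` yields a (1, 16, 64, 256) tensor of denoised latent sets; denoising all frames jointly is what lets the temporal attention keep shape and motion consistent across the animation.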

Page Count
12 pages

Category
Computer Science:
Computer Vision and Pattern Recognition