NormalCrafter: Learning Temporally Consistent Normals from Video Diffusion Priors
By: Yanrui Bin, Wenbo Hu, Haoyuan Wang, and others
Potential Business Impact:
Extracts smooth, flicker-free 3D surface orientation (normals) from videos.
Surface normal estimation serves as a cornerstone for a spectrum of computer vision applications. While numerous efforts have been devoted to static image scenarios, ensuring temporal coherence in video-based normal estimation remains a formidable challenge. Instead of merely augmenting existing methods with temporal components, we present NormalCrafter, which leverages the inherent temporal priors of video diffusion models. To secure high-fidelity normal estimation across sequences, we propose Semantic Feature Regularization (SFR), which aligns diffusion features with semantic cues, encouraging the model to concentrate on the intrinsic semantics of the scene. Moreover, we introduce a two-stage training protocol that leverages both latent and pixel space learning to preserve spatial accuracy while maintaining long temporal context. Extensive evaluations demonstrate the efficacy of our method, showcasing superior performance in generating temporally consistent normal sequences with intricate details from diverse videos.
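The abstract gives no implementation details, but the SFR idea can be read as a feature-alignment loss between the diffusion model's intermediate features and those of a frozen semantic encoder. Below is a minimal, hypothetical PyTorch sketch of such a loss. The function name sfr_loss, the projection head, and all tensor dimensions are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sfr_loss(diff_feats: torch.Tensor,
             sem_feats: torch.Tensor,
             proj: nn.Module) -> torch.Tensor:
    """Sketch of a Semantic Feature Regularization term (assumed form).

    diff_feats: (B, C_d, H, W) intermediate video-diffusion features
    sem_feats:  (B, C_s, h, w) features from a frozen semantic encoder
    proj:       trainable head mapping C_d -> C_s
    """
    # Match spatial resolutions before comparing the two feature maps.
    sem_feats = F.interpolate(sem_feats, size=diff_feats.shape[-2:],
                              mode="bilinear", align_corners=False)
    # Penalize per-pixel misalignment: 1 - cosine similarity, averaged.
    cos = F.cosine_similarity(proj(diff_feats), sem_feats, dim=1)
    return (1.0 - cos).mean()

# Toy usage with random tensors standing in for real features.
proj = nn.Conv2d(320, 768, kernel_size=1)  # C_d=320 -> C_s=768, illustrative sizes
diff = torch.randn(2, 320, 32, 32)         # stand-in for diffusion U-Net features
sem = torch.randn(2, 768, 16, 16)          # stand-in for semantic encoder features
loss = sfr_loss(diff, sem, proj)
```

Under this reading, the term would be added to the training objective so that diffusion features stay anchored to scene semantics; how the paper weights or schedules it across the two training stages is not specified in the abstract.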
Similar Papers
SpatialCrafter: Unleashing the Imagination of Video Diffusion Models for Scene Reconstruction from Limited Observations
CV and Pattern Recognition
Reconstructs 3D scenes from just one or a few photos.
GeometryCrafter: Consistent Geometry Estimation for Open-world Videos with Diffusion Priors
Graphics
Builds accurate 3D geometry from videos.
A 3DGS-Diffusion Self-Supervised Framework for Normal Estimation from a Single Image
CV and Pattern Recognition
Makes 3D shapes from one picture.