NormalCrafter: Learning Temporally Consistent Normals from Video Diffusion Priors

Published: April 15, 2025 | arXiv ID: 2504.11427v1

By: Yanrui Bin, Wenbo Hu, Haoyuan Wang, and more

Potential Business Impact:

Produces smooth, temporally consistent 3D surface normal maps from video.

Business Areas:
Image Recognition Data and Analytics, Software

Surface normal estimation serves as a cornerstone for a spectrum of computer vision applications. While numerous efforts have been devoted to static image scenarios, ensuring temporal coherence in video-based normal estimation remains a formidable challenge. Instead of merely augmenting existing methods with temporal components, we present NormalCrafter to leverage the inherent temporal priors of video diffusion models. To secure high-fidelity normal estimation across sequences, we propose Semantic Feature Regularization (SFR), which aligns diffusion features with semantic cues, encouraging the model to concentrate on the intrinsic semantics of the scene. Moreover, we introduce a two-stage training protocol that leverages both latent and pixel space learning to preserve spatial accuracy while maintaining long temporal context. Extensive evaluations demonstrate the efficacy of our method, showcasing superior performance in generating temporally consistent normal sequences with intricate details from diverse videos.
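The abstract describes SFR as aligning diffusion features with semantic cues. A minimal sketch of one common way such an alignment term can be written is a per-pixel cosine-similarity loss between the diffusion backbone's features and features from a frozen semantic encoder; the function name, the shape assumptions, and the use of a plain cosine objective are illustrative assumptions here, not the paper's exact formulation (which may involve a learned projection head and different weighting).

```python
import torch
import torch.nn.functional as F

def sfr_loss(diffusion_feats: torch.Tensor, semantic_feats: torch.Tensor) -> torch.Tensor:
    """Illustrative semantic-alignment loss (assumption, not the paper's exact form).

    Both tensors are assumed to be (B, C, H, W) with matching shapes,
    e.g. after a learned projection head (not shown). Returns
    1 - mean per-pixel cosine similarity, so identical features give ~0.
    """
    d = F.normalize(diffusion_feats.flatten(2), dim=1)  # (B, C, H*W), unit-norm per pixel
    s = F.normalize(semantic_feats.flatten(2), dim=1)
    cos = (d * s).sum(dim=1)                            # (B, H*W) cosine similarities
    return 1.0 - cos.mean()
```

In practice the semantic features would come from a frozen pretrained encoder evaluated on the same frames, and this term would be added to the main normal-estimation objective with a weighting coefficient.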

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition