StableIntrinsic: Detail-preserving One-step Diffusion Model for Multi-view Material Estimation
By: Xiuchao Wu, Pengfei Zhu, Jiangjing Lyu, and more
Potential Business Impact:
Makes computers guess what things are made of faster.
Recovering material information from images has been extensively studied in computer graphics and vision. Recent works in material estimation leverage diffusion models and show promising results. However, these diffusion-based methods adopt a multi-step denoising strategy, which makes each estimation time-consuming. Such stochastic inference also conflicts with the deterministic nature of material estimation, leading to high-variance results. In this paper, we introduce StableIntrinsic, a one-step diffusion model for multi-view material estimation that produces high-quality material parameters with low variance. To address the over-smoothing problem in one-step diffusion, StableIntrinsic applies losses in pixel space, with each loss designed around the properties of the corresponding material parameter. Additionally, StableIntrinsic introduces a Detail Injection Network (DIN) to eliminate the detail loss caused by VAE encoding, further enhancing the sharpness of material predictions. Experimental results indicate that our method surpasses current state-of-the-art techniques, achieving a $9.9\%$ improvement in the Peak Signal-to-Noise Ratio (PSNR) of albedo and reducing the Mean Square Error (MSE) for metallic and roughness by $44.4\%$ and $60.0\%$, respectively.
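The abstract contrasts stochastic multi-step denoising with deterministic one-step inference. A minimal toy sketch (NumPy; the denoiser, step count, and noise scale are all hypothetical stand-ins, not the paper's actual architecture) of why a deterministic one-step pass removes estimation variance while iterative stochastic sampling does not:

```python
import numpy as np

def toy_denoiser(z, t):
    # Stand-in for the diffusion network: deterministically moves the
    # latent z toward a fixed "clean" material map as t goes from 1 to 0.
    target = np.full_like(z, 0.5)  # hypothetical ground-truth material value
    return z + (target - z) * (1.0 - t)

def multi_step_estimate(latent, steps=10, rng=None):
    # DDPM-style sampling: start from random noise and re-inject noise at
    # every step, so two runs on the same input disagree.
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(latent.shape)
    for i in range(steps, 0, -1):
        t = i / steps
        z = toy_denoiser(z, t) + 0.05 * rng.standard_normal(z.shape)
    return z

def one_step_estimate(latent):
    # One-step inference: a single deterministic pass at a fixed timestep,
    # with no injected noise, so the output is a function of the input only.
    return toy_denoiser(latent, t=0.0)
```

Running `one_step_estimate` twice on the same latent yields identical outputs, whereas `multi_step_estimate` with different seeds yields different material maps, which is the variance issue the one-step design avoids.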
Similar Papers
Intrinsic Image Fusion for Multi-View 3D Material Reconstruction
CV and Pattern Recognition
Makes computer images look like real things.
Towards Spatially Consistent Image Generation: On Incorporating Intrinsic Scene Properties into Diffusion Models
CV and Pattern Recognition
Makes AI pictures look more real and organized.
FROMAT: Multiview Material Appearance Transfer via Few-Shot Self-Attention Adaptation
CV and Pattern Recognition
Changes how things look in 3D pictures.