StableIntrinsic: Detail-preserving One-step Diffusion Model for Multi-view Material Estimation

Published: August 27, 2025 | arXiv ID: 2508.19789v1

By: Xiuchao Wu, Pengfei Zhu, Jiangjing Lyu, and more

Potential Business Impact:

Lets computers estimate what objects are made of from photos much faster and more reliably.

Business Areas:
Advanced Materials Manufacturing, Science and Engineering

Recovering material information from images has been extensively studied in computer graphics and vision. Recent works in material estimation leverage diffusion models and show promising results. However, these diffusion-based methods adopt a multi-step denoising strategy, which makes each estimation time-consuming. Such stochastic inference also conflicts with the deterministic nature of material estimation, leading to high-variance results. In this paper, we introduce StableIntrinsic, a one-step diffusion model for multi-view material estimation that produces high-quality material parameters with low variance. To address the over-smoothing problem in one-step diffusion, StableIntrinsic applies losses in pixel space, with each loss designed around the properties of the corresponding material parameter. Additionally, StableIntrinsic introduces a Detail Injection Network (DIN) to eliminate the detail loss caused by VAE encoding and further sharpen the material predictions. Experimental results show that our method surpasses current state-of-the-art techniques, improving the Peak Signal-to-Noise Ratio (PSNR) of albedo by $9.9\%$ and reducing the Mean Squared Error (MSE) of metallic and roughness by $44.4\%$ and $60.0\%$, respectively.
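To make the abstract's pipeline concrete, below is a minimal sketch of the overall idea: a single fixed-timestep UNet pass (instead of iterative stochastic sampling) predicts material maps in latent space, and a small image-space branch re-injects detail that the VAE encoder discards. All class and function names (DetailInjectionNet, estimate_materials, the vae/unet/material_decoder modules) are hypothetical placeholders, not the authors' code or released API.

```python
import torch
import torch.nn as nn

class DetailInjectionNet(nn.Module):
    """Hypothetical skip branch: fuses raw-image features with the coarse
    material maps so high-frequency detail bypasses the VAE bottleneck."""
    def __init__(self, in_ch=3, feat_ch=32, mat_ch=5):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(feat_ch + mat_ch, mat_ch, 3, padding=1)

    def forward(self, image, coarse_material):
        details = self.encode(image)
        return self.fuse(torch.cat([details, coarse_material], dim=1))

def estimate_materials(image, vae, unet, material_decoder, din):
    """One-step estimation: a single UNet evaluation at a fixed timestep,
    so the output is deterministic rather than sampled over many steps."""
    latent = vae.encode(image)                           # image -> latent (detail is lost here)
    t = torch.zeros(latent.shape[0], dtype=torch.long)   # single fixed timestep
    material_latent = unet(latent, t)                    # one forward pass, no iterative denoising
    coarse = material_decoder(material_latent)           # e.g. albedo(3) + roughness(1) + metallic(1)
    return din(image, coarse)                            # re-inject pixel-level detail
```

Because the decoded maps live in pixel space, per-property losses (e.g. a perceptual or gradient loss on albedo, plain MSE on the scalar roughness and metallic channels) can then be applied directly to the output of `estimate_materials`, which is the role the abstract assigns to the pixel-space losses.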

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition