Intrinsic Image Fusion for Multi-View 3D Material Reconstruction
By: Peter Kocsis, Lukas Höllein, Matthias Nießner
Potential Business Impact:
Recovers what objects are made of from photos, so 3D scenes can be relit realistically.
We introduce Intrinsic Image Fusion, a method that reconstructs high-quality physically based materials from multi-view images. Material reconstruction is highly underconstrained and typically relies on analysis-by-synthesis, which requires expensive and noisy path tracing. To better constrain the optimization, we incorporate single-view priors into the reconstruction process. We leverage a diffusion-based material estimator that produces multiple, but often inconsistent, candidate decompositions per view. To reduce this inconsistency, we fit an explicit low-dimensional parametric function to the predictions. We then propose a robust optimization framework that uses soft per-view prediction selection together with a confidence-based soft multi-view inlier set to fuse the most consistent predictions from the most confident views into a consistent parametric material space. Finally, we use inverse path tracing to optimize the low-dimensional parameters. Our results outperform state-of-the-art methods in material disentanglement on both synthetic and real scenes, producing sharp, clean reconstructions suitable for high-quality relighting.
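To make the fusion step concrete, here is a minimal sketch of soft per-view selection combined with confidence-based soft inlier weighting, assuming the candidate material decompositions are stacked as flat vectors (e.g., albedo and roughness per texel). The function name fuse_candidates, the temperature tau, and the vectorized layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_candidates(candidates, confidences, n_iters=10, tau=0.05):
    """Fuse inconsistent per-view material candidates into one estimate.

    candidates:  (V, K, D) array — K candidate decompositions per view,
                 each flattened into a D-dim material vector (assumption).
    confidences: (V,) per-view confidence scores in [0, 1].
    Returns a single fused D-dim material vector.
    """
    V, K, D = candidates.shape
    # Initialize with the confidence-weighted mean over all candidates.
    fused = np.average(candidates.reshape(V * K, D), axis=0,
                       weights=np.repeat(confidences, K))
    for _ in range(n_iters):
        # Soft per-view prediction selection: softmax over the K candidates
        # of each view, favoring those that agree with the current estimate.
        err = np.linalg.norm(candidates - fused, axis=-1)        # (V, K)
        sel = np.exp(-err / tau)
        sel /= sel.sum(axis=1, keepdims=True)
        per_view = (sel[..., None] * candidates).sum(axis=1)     # (V, D)
        # Confidence-based soft multi-view inlier weights: down-weight views
        # whose selected prediction still disagrees with the consensus.
        view_err = np.linalg.norm(per_view - fused, axis=-1)     # (V,)
        inlier = confidences * np.exp(-view_err / tau)
        inlier /= inlier.sum()
        fused = (inlier[:, None] * per_view).sum(axis=0)
    return fused

# Toy usage: 8 views, 4 candidates each, 5 material dimensions.
rng = np.random.default_rng(0)
cands = rng.random((8, 4, 5))
conf = rng.random(8)
print(fuse_candidates(cands, conf))
```

In this sketch both weightings are soft (exponential kernels rather than hard thresholds), mirroring the abstract's "soft" selection and inlier set; the fused vector would then serve as the target for the low-dimensional parametric fit that inverse path tracing refines.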
Similar Papers
Texture-aware Intrinsic Image Decomposition with Model- and Learning-based Priors
CV and Pattern Recognition
Separates object colors from lighting in photos.
StableIntrinsic: Detail-preserving One-step Diffusion Model for Multi-view Material Estimation
CV and Pattern Recognition
Lets computers estimate what things are made of more quickly.
Material-informed Gaussian Splatting for 3D World Reconstruction in a Digital Twin
CV and Pattern Recognition
Creates digital twins of real places using only cameras.