Neural Multi-View Self-Calibrated Photometric Stereo without Photometric Stereo Cues
By: Xu Cao, Takafumi Taketomi
Potential Business Impact:
Rebuilds realistic 3D objects, with their materials and lighting, from photos.
We propose a neural inverse rendering approach that jointly reconstructs geometry, spatially varying reflectance, and lighting conditions from multi-view images captured under varying directional lighting. Unlike prior multi-view photometric stereo methods that require light calibration or intermediate cues such as per-view normal maps, our method jointly optimizes all scene parameters from raw images in a single stage. We represent both geometry and reflectance as neural implicit fields and apply shadow-aware volume rendering. A spatial network first predicts the signed distance and a reflectance latent code for each scene point. A reflectance network then estimates reflectance values conditioned on the latent code and angularly encoded surface normal, view, and light directions. The proposed method outperforms state-of-the-art normal-guided approaches in shape and lighting estimation accuracy, generalizes to view-unaligned multi-light images, and handles objects with challenging geometry and reflectance.
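The abstract describes a two-network design: a spatial network maps a 3D point to a signed distance and a reflectance latent code, and a reflectance network predicts reflectance from that code plus angularly encoded surface normal, view, and light directions. Below is a minimal PyTorch sketch of that structure. The layer widths, activation choices, 32-dimensional latent code, and the use of (n·l, n·v, n·h) dot products as the angular encoding are illustrative assumptions, not the paper's actual architecture; shadow-aware volume rendering is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialNetwork(nn.Module):
    """Maps a 3D point to a signed distance and a reflectance latent code."""

    def __init__(self, latent_dim=32, hidden=256):  # sizes are assumptions
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1 + latent_dim),
        )

    def forward(self, x):
        out = self.mlp(x)
        sdf, latent = out[..., :1], out[..., 1:]
        return sdf, latent


def sdf_normal(spatial_net, x):
    """Surface normal as the normalized gradient of the signed distance field."""
    x = x.requires_grad_(True)
    sdf, _ = spatial_net(x)
    grad = torch.autograd.grad(sdf.sum(), x, create_graph=True)[0]
    return F.normalize(grad, dim=-1)


class ReflectanceNetwork(nn.Module):
    """Predicts RGB reflectance from the latent code and angularly encoded
    normal (n), view (v), and light (l) directions."""

    def __init__(self, latent_dim=32, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, latent, n, v, l):
        # Assumed angular encoding: dot products with the half vector h,
        # one plausible reading of "angularly encoded" in the abstract.
        h = F.normalize(v + l, dim=-1)
        ang = torch.stack(
            [(n * l).sum(-1), (n * v).sum(-1), (n * h).sum(-1)], dim=-1
        )
        return self.mlp(torch.cat([latent, ang], dim=-1))


# Usage: evaluate reflectance at sampled scene points.
spatial, refl = SpatialNetwork(), ReflectanceNetwork()
x = torch.randn(8, 3)                      # sample 3D points
n = sdf_normal(spatial, x)                 # normals from the SDF gradient
_, z = spatial(x)                          # per-point reflectance latent codes
v = F.normalize(torch.randn(8, 3), dim=-1) # view directions
l = F.normalize(torch.randn(8, 3), dim=-1) # light directions
rgb = refl(z, n, v, l)                     # per-point reflectance values
```

Encoding the directions as dot products rather than raw vectors makes the predicted reflectance invariant to rotations of the local frame, which is consistent with the abstract's claim that the method generalizes across varying lighting.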
Similar Papers
Photometric Stereo using Gaussian Splatting and inverse rendering
Image and Video Processing
Makes 3D models from light and shadows.
Multi-view Surface Reconstruction Using Normal and Reflectance Cues
Computer Vision and Pattern Recognition
Makes 3D models built from pictures more detailed.
Geometry Meets Light: Leveraging Geometric Priors for Universal Photometric Stereo under Limited Multi-Illumination Cues
Computer Vision and Pattern Recognition
Makes 3D cameras see shapes better in any light.