GLOW: Global Illumination-Aware Inverse Rendering of Indoor Scenes Captured with Dynamic Co-Located Light & Camera
By: Jiaye Wu, Saeed Hadadan, Geng Lin, and more
Potential Business Impact:
Makes 3D pictures show true colors and light.
Inverse rendering of indoor scenes remains challenging due to the ambiguity between reflectance and lighting, exacerbated by inter-reflections among multiple objects. While natural illumination-based methods struggle to resolve this ambiguity, co-located light-camera setups offer better disentanglement because the lighting can be easily calibrated via Structure-from-Motion. However, such setups introduce additional complexities such as strong inter-reflections, dynamic shadows, near-field lighting, and moving specular highlights, which existing approaches fail to handle. We present GLOW, a Global Illumination-Aware Inverse Rendering framework designed to address these challenges. GLOW integrates a neural implicit surface representation with a neural radiance cache to approximate global illumination, jointly optimizing geometry and reflectance through carefully designed regularization and initialization. We then introduce a dynamic radiance cache that adapts to sharp lighting discontinuities from near-field motion, and a surface-angle-weighted radiometric loss to suppress specular artifacts common in flashlight captures. Experiments show that GLOW substantially outperforms prior methods in material reflectance estimation under both natural and co-located illumination.
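To make the surface-angle-weighted radiometric loss mentioned in the abstract more concrete, below is a minimal PyTorch sketch of one plausible interpretation. The paper does not specify the weighting function here; the function name, the cosine-based weight, and the exponent k are illustrative assumptions. The idea sketched is to down-weight pixels where the co-located view/light direction is nearly aligned with the surface normal, since flash captures tend to concentrate specular highlights there.

```python
import torch


def surface_angle_weighted_loss(pred_rgb, gt_rgb, view_dirs, normals, k=2.0):
    """Hypothetical sketch of a surface-angle-weighted radiometric loss.

    Assumption (not from the paper): pixels whose viewing direction is close
    to the surface normal are down-weighted, since co-located flash captures
    tend to place specular highlights near normal incidence. cos(theta)**k is
    a placeholder weighting, not GLOW's actual formulation.

    Shapes: pred_rgb, gt_rgb, view_dirs, normals are all (N, 3); view_dirs and
    normals are assumed unit-normalized.
    """
    # cos(theta) between the (camera = light) direction and the surface normal
    cos_theta = (view_dirs * normals).sum(dim=-1).clamp(min=0.0, max=1.0)
    # Assumed weighting: suppress highlight-prone, near-normal pixels
    weights = 1.0 - cos_theta.pow(k)
    # Per-pixel squared radiometric error, averaged over RGB channels
    per_pixel = ((pred_rgb - gt_rgb) ** 2).mean(dim=-1)
    # Weighted mean over the batch of pixels
    return (weights * per_pixel).sum() / (weights.sum() + 1e-8)


if __name__ == "__main__":
    # Tiny smoke test with random unit vectors and colors
    n = 1024
    dirs = torch.nn.functional.normalize(torch.randn(n, 3), dim=-1)
    normals = torch.nn.functional.normalize(torch.randn(n, 3), dim=-1)
    pred, gt = torch.rand(n, 3), torch.rand(n, 3)
    print(surface_angle_weighted_loss(pred, gt, dirs, normals).item())
```

In this sketch the weight goes to zero exactly where the view direction and normal coincide, which is where a co-located flash produces its brightest highlight; any such weighting scheme trades off highlight suppression against losing signal from well-lit, front-facing surfaces.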
Similar Papers
Geometry-Aware Global Feature Aggregation for Real-Time Indirect Illumination
Graphics
Makes virtual worlds look more real.
Taming the Light: Illumination-Invariant Semantic 3DGS-SLAM
CV and Pattern Recognition
Lets robots see clearly in any light.
A Generalizable Light Transport 3D Embedding for Global Illumination
Graphics
Makes computer pictures look real with light.