Spatiotemporally Consistent Indoor Lighting Estimation with Diffusion Priors
By: Mutian Tong, Rundi Wu, Changxi Zheng
Potential Business Impact:
Shows how light changes in a room from video.
Indoor lighting estimation from a single image or video remains challenging due to its highly ill-posed nature, especially when the lighting condition of the scene varies spatially and temporally. We propose a method that estimates, from an input video, a continuous light field describing the spatiotemporally varying lighting of the scene. We leverage 2D diffusion priors to optimize such a light field, represented as an MLP. To enable zero-shot generalization to in-the-wild scenes, we fine-tune a pre-trained image diffusion model to predict lighting at multiple locations by jointly inpainting multiple chrome balls as light probes. We evaluate our method on indoor lighting estimation from a single image or video and show superior performance over the compared baselines. Most importantly, we highlight results on spatiotemporally consistent lighting estimation from in-the-wild videos, which is rarely demonstrated in previous work.
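The abstract describes representing the continuous light field as an MLP mapping a spatiotemporal query to a lighting code. A minimal sketch of what such a network might look like, assuming (purely for illustration) a NumPy implementation, an input of (x, y, z, t), and an output of 2nd-order spherical-harmonic RGB coefficients (9 bands × 3 channels = 27 values); the layer sizes and output parameterization are assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # He-style random initialization for a small fully connected network.
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def light_field(params, query):
    # Evaluate the MLP at a spatiotemporal query (x, y, z, t).
    # ReLU on hidden layers, linear output.
    h = query
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)
    return h  # hypothetical lighting code: 27 SH coefficients (9 bands x RGB)

# Query the lighting at one point in space and time.
params = init_mlp([4, 64, 64, 27])
sh = light_field(params, np.array([0.1, 0.5, -0.2, 0.0]))
print(sh.shape)  # (27,)
```

In the paper's pipeline, the weights of such a network would be optimized so that lighting rendered from its outputs agrees with the diffusion model's inpainted chrome-ball probes; the untrained forward pass above only illustrates the representation.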
Similar Papers
LuxDiT: Lighting Estimation with Video Diffusion Transformer
Graphics
Makes computer pictures show real-world light.
DiffusionLight-Turbo: Accelerated Light Probes for Free via Single-Pass Chrome Ball Inpainting
CV and Pattern Recognition
Makes pictures show real-world light faster.
Yesnt: Are Diffusion Relighting Models Ready for Capture Stage Compositing? A Hybrid Alternative to Bridge the Gap
CV and Pattern Recognition
Makes virtual actors look real in movies.