GR3EN: Generative Relighting for 3D Environments
By: Xiaoyan Xing, Philipp Henzler, Junhwa Hur, and more
Potential Business Impact:
Changes how 3D rooms look under new lighting.
We present a method for relighting 3D reconstructions of large room-scale environments. Existing solutions for 3D scene relighting often require solving under-determined or ill-conditioned inverse rendering problems, and as such are unable to produce high-quality results on complex real-world scenes. Though recent progress in using generative image and video diffusion models for relighting has been promising, these techniques are limited either to 2D image and video relighting or to 3D relighting of individual objects. Our approach enables controllable 3D relighting of room-scale scenes by distilling the outputs of a video-to-video relighting diffusion model into a 3D reconstruction. This sidesteps the need to solve a difficult inverse rendering problem, and results in a flexible system that can relight 3D reconstructions of complex real-world scenes. We validate our approach on both synthetic and real-world datasets to show that it can faithfully render novel views of scenes under new lighting conditions.
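To make the distillation idea concrete, here is a minimal PyTorch sketch of the kind of loop the abstract implies. Everything in it is a stand-in, not the authors' actual components: `RelightingVideoModel`, `Scene3D`, and `distill` are hypothetical names, the networks are toy placeholders, and the loss is plain MSE. The shape of the loop is the point: render views from the reconstruction, relight them once with a frozen video model, then optimize the reconstruction to match the relit frames.

```python
import torch
import torch.nn as nn

class RelightingVideoModel(nn.Module):
    """Hypothetical stand-in for the video-to-video relighting diffusion model:
    maps rendered frames plus a lighting condition to relit frames."""
    def __init__(self, cond_dim=8):
        super().__init__()
        self.net = nn.Conv2d(3 + cond_dim, 3, kernel_size=3, padding=1)

    def forward(self, frames, lighting):
        # frames: (T, 3, H, W); lighting: (cond_dim,) embedding of the target light.
        t, _, h, w = frames.shape
        cond = lighting.view(1, -1, 1, 1).expand(t, -1, h, w)
        return torch.sigmoid(self.net(torch.cat([frames, cond], dim=1)))

class Scene3D(nn.Module):
    """Hypothetical differentiable 3D reconstruction; here just a learnable
    per-view image grid so the sketch runs end to end."""
    def __init__(self, num_views=4, h=64, w=64):
        super().__init__()
        self.appearance = nn.Parameter(torch.rand(num_views, 3, h, w))

    def render(self, view_ids):
        return torch.sigmoid(self.appearance[view_ids])

def distill(scene, relighter, lighting, view_ids, steps=200, lr=1e-2):
    """Distill the relit video into the 3D reconstruction: relight rendered
    views once with the frozen video model, then fit the scene to the result."""
    with torch.no_grad():
        targets = relighter(scene.render(view_ids), lighting)  # pseudo ground truth
    opt = torch.optim.Adam(scene.parameters(), lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(scene.render(view_ids), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return scene

scene = Scene3D()
relighter = RelightingVideoModel().eval()   # frozen relighting model
lighting = torch.randn(8)                   # hypothetical lighting embedding
distill(scene, relighter, lighting, view_ids=torch.arange(4))
```

The design choice this illustrates is the one the abstract emphasizes: because the relit frames come from a generative model rather than from estimated materials and light transport, the 3D side only has to fit images, and no inverse rendering problem is ever solved.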
Similar Papers
Gen3R: 3D Scene Generation Meets Feed-Forward Reconstruction
CV and Pattern Recognition
Creates 3D worlds from pictures and videos.
LightSwitch: Multi-view Relighting with Material-guided Diffusion
CV and Pattern Recognition
Changes how objects look under different lights.
3D-RE-GEN: 3D Reconstruction of Indoor Scenes with a Generative Framework
CV and Pattern Recognition
Builds 3D worlds from one picture.