SyncLight: Controllable and Consistent Multi-View Relighting
By: David Serrano-Lozano, Anand Bhattad, Luis Herranz, and more
Potential Business Impact:
Adjusts scene lighting consistently across many camera views at once, for film and broadcast.
We present SyncLight, the first method to enable consistent, parametric relighting across multiple uncalibrated views of a static scene. While single-view relighting has advanced significantly, existing generative approaches struggle to maintain the rigorous lighting consistency essential for multi-camera broadcasts, stereoscopic cinema, and virtual production. SyncLight addresses this by enabling precise control over light intensity and color across a multi-view capture of a scene, conditioned on a single reference edit. Our method leverages a multi-view diffusion transformer trained using a latent bridge matching formulation, achieving high-fidelity relighting of the entire image set in a single inference step. To facilitate training, we introduce a large-scale hybrid dataset comprising diverse synthetic environments -- curated from existing sources and newly designed scenes -- alongside high-fidelity, real-world multi-view captures under calibrated illumination. Surprisingly, though trained only on image pairs, SyncLight generalizes zero-shot to an arbitrary number of viewpoints, effectively propagating lighting changes across all views, without requiring camera pose information. SyncLight enables practical relighting workflows for multi-view capture systems.
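To make the single-step inference idea concrete, here is a minimal, hypothetical sketch of the inference loop described in the abstract: a learned one-step map takes each view's latent plus a shared lighting condition (intensity and color from the single reference edit) and produces the relit latent. The model here is a toy random linear map, not SyncLight's actual multi-view diffusion transformer; all names and the condition encoding are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained one-step bridge-matching model:
# a fixed random linear map from (source latent, lighting condition)
# to target latent. The real model is a multi-view diffusion transformer.
LATENT_DIM, COND_DIM = 16, 4  # hypothetical sizes
W = rng.standard_normal((LATENT_DIM + COND_DIM, LATENT_DIM)) * 0.1

def relight_views(latents, cond):
    """Single-step inference: one forward pass per view, with the SAME
    lighting condition broadcast to every view, which is what keeps the
    edit consistent across an arbitrary number of viewpoints."""
    cond = np.broadcast_to(cond, (latents.shape[0], COND_DIM))
    x = np.concatenate([latents, cond], axis=-1)  # (n_views, latent+cond)
    return x @ W                                  # (n_views, latent)

views = rng.standard_normal((3, LATENT_DIM))   # latents for 3 uncalibrated views
cond = np.array([0.8, 1.0, 0.9, 0.7])          # intensity + RGB color (assumed encoding)
relit = relight_views(views, cond)
print(relit.shape)  # (3, 16)
```

Note that no camera poses appear anywhere in the call: consistency comes from sharing one condition vector across all views, mirroring the paper's claim of pose-free, zero-shot propagation to extra viewpoints.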
Similar Papers
LightSwitch: Multi-view Relighting with Material-guided Diffusion
CV and Pattern Recognition
Changes how objects look under different lights.
Light-X: Generative 4D Video Rendering with Camera and Illumination Control
CV and Pattern Recognition
Creates new videos with changing camera and light.
Training-Free Multi-View Extension of IC-Light for Textual Position-Aware Scene Relighting
CV and Pattern Recognition
Changes 3D scenes' lighting with text prompts.