ReLumix: Extending Image Relighting to Video via Video Diffusion Models
By: Lezhong Wang, Shutong Jin, Ruiqi Cui, and more
Potential Business Impact:
Changes video lighting easily after filming.
Controlling illumination during video post-production is a crucial yet elusive goal in computational photography. Existing methods often lack flexibility, restricting users to specific relighting models. This paper introduces ReLumix, a novel framework that decouples the relighting algorithm from temporal synthesis, thereby enabling any image relighting technique to be applied seamlessly to video. The approach reformulates video relighting as a simple yet effective two-stage process: (1) an artist relights a single reference frame using any preferred image-based technique (e.g., diffusion models or physics-based renderers); and (2) a fine-tuned Stable Video Diffusion (SVD) model propagates the target illumination throughout the sequence. To ensure temporal coherence and prevent artifacts, the authors introduce a gated cross-attention mechanism for smooth feature blending and a temporal bootstrapping strategy that harnesses SVD's strong motion priors. Although trained on synthetic data, ReLumix shows competitive generalization to real-world videos. The method demonstrates significant improvements in visual fidelity, offering a scalable and versatile solution for dynamic lighting control.
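The abstract does not give the exact form of the gated cross-attention, but a common design (used, e.g., in Flamingo-style adapters) adds the cross-attended reference features to the video stream through a learnable scalar gate initialized at zero, so the pretrained backbone's behavior is preserved at the start of fine-tuning. A minimal NumPy sketch under that assumption, with single-head attention and made-up shapes:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x, context, Wq, Wk, Wv):
    # x: (n, d) video-frame tokens; context: (m, d) relit-reference tokens
    q, k, v = x @ Wq, context @ Wk, context @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

def gated_cross_attention(x, context, Wq, Wk, Wv, gate):
    # A scalar gate (tanh-squashed, initialized at 0) controls how much
    # of the reference illumination is blended into the video features.
    return x + np.tanh(gate) * cross_attention(x, context, Wq, Wk, Wv)

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((4, d))    # 4 tokens from a video frame (hypothetical)
ctx = rng.standard_normal((6, d))  # 6 tokens from the relit reference frame
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

# With gate = 0 the layer is an identity mapping, so fine-tuning can
# start from the pretrained SVD behavior and open the gate gradually.
out = gated_cross_attention(x, ctx, Wq, Wk, Wv, gate=0.0)
print(np.allclose(out, x))  # → True
```

All names and shapes here are illustrative, not the paper's implementation; the point is only the residual-with-gate structure that lets the new conditioning be introduced without disrupting the pretrained temporal priors.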
Similar Papers
UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback
CV and Pattern Recognition
Makes pictures and videos look real with new lighting.
Light-X: Generative 4D Video Rendering with Camera and Illumination Control
CV and Pattern Recognition
Creates new videos with changing camera and light.
Lumen: Consistent Video Relighting and Harmonious Background Replacement with Video Generative Models
CV and Pattern Recognition
Changes video lighting and background with words.