Score: 1

ReLumix: Extending Image Relighting to Video via Video Diffusion Models

Published: September 28, 2025 | arXiv ID: 2509.23769v1

By: Lezhong Wang, Shutong Jin, Ruiqi Cui, and more

Potential Business Impact:

Lets editors change a video's lighting after filming.

Business Areas:
Lighting Hardware

Controlling illumination during video post-production is a crucial yet elusive goal in computational photography. Existing methods often lack flexibility, restricting users to specific relighting models. This paper introduces ReLumix, a novel framework that decouples the relighting algorithm from temporal synthesis, thereby enabling any image relighting technique to be seamlessly applied to video. The approach reformulates video relighting as a simple yet effective two-stage process: (1) an artist relights a single reference frame using any preferred image-based technique (e.g., diffusion models, physics-based renderers); and (2) a fine-tuned Stable Video Diffusion (SVD) model propagates this target illumination throughout the sequence. To ensure temporal coherence and prevent artifacts, the authors introduce a gated cross-attention mechanism for smooth feature blending and a temporal bootstrapping strategy that harnesses SVD's powerful motion priors. Although trained on synthetic data, ReLumix shows competitive generalization to real-world videos. The method demonstrates significant improvements in visual fidelity, offering a scalable and versatile solution for dynamic lighting control.
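
The abstract describes a gated cross-attention block that blends features from the relit reference frame into the video diffusion backbone. Below is a minimal PyTorch-style sketch of what such a block might look like; the class name, the zero-initialized tanh gate, and all tensor shapes are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a gated cross-attention block for injecting
# relit-reference-frame features into video features. Names and shapes
# are assumptions for illustration only.
import torch
import torch.nn as nn


class GatedCrossAttention(nn.Module):
    """Blend reference-frame features into video features via a learned gate."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Zero-initialized gate: the block starts as an identity mapping,
        # leaving the pretrained video backbone undisturbed early in fine-tuning.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, video_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, N_video, dim) from the video diffusion backbone
        # ref_tokens:   (B, N_ref, dim) encoded from the relit reference frame
        attended, _ = self.attn(self.norm(video_tokens), ref_tokens, ref_tokens)
        return video_tokens + torch.tanh(self.gate) * attended


if __name__ == "__main__":
    # Toy shapes only; real features would come from the SVD UNet and an
    # encoder applied to the relit reference frame.
    block = GatedCrossAttention(dim=320)
    video_feats = torch.randn(2, 64, 320)
    ref_feats = torch.randn(2, 16, 320)
    print(block(video_feats, ref_feats).shape)  # torch.Size([2, 64, 320])
```

Zero-initializing the gate is one common way to let new conditioning be learned gradually while the pretrained backbone's motion priors are preserved; the paper's exact formulation may differ.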

Country of Origin
🇩🇰 🇸🇪 Denmark, Sweden

Page Count
14 pages

Category
Computer Science:
Graphics