V-RGBX: Video Editing with Accurate Controls over Intrinsic Properties
By: Ye Fang, Tong Wu, Valentin Deschaintre, and more
Potential Business Impact:
Edits videos by changing their underlying scene properties, such as materials and lighting.
Large-scale video generation models have shown remarkable potential in modeling photorealistic appearance and lighting interactions in real-world scenes. However, a closed-loop framework that jointly understands intrinsic scene properties (e.g., albedo, normal, material, and irradiance), leverages them for video synthesis, and supports editable intrinsic representations remains unexplored. We present V-RGBX, the first end-to-end framework for intrinsic-aware video editing. V-RGBX unifies three key capabilities: (1) video inverse rendering into intrinsic channels, (2) photorealistic video synthesis from these intrinsic representations, and (3) keyframe-based video editing conditioned on intrinsic channels. At the core of V-RGBX is an interleaved conditioning mechanism that enables intuitive, physically grounded video editing through user-selected keyframes, supporting flexible manipulation of any intrinsic modality. Extensive qualitative and quantitative results show that V-RGBX produces temporally consistent, photorealistic videos while propagating keyframe edits across sequences in a physically plausible manner. We demonstrate its effectiveness in diverse applications, including object appearance editing and scene-level relighting, surpassing the performance of prior methods.
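The abstract's pipeline can be pictured as three stages wired together: decompose a video into intrinsic channels, edit one channel of a user-selected keyframe, then re-render the full video conditioned on those channels. The sketch below is a minimal, runnable illustration of that data flow, assuming the paper's stated decomposition into albedo, normal, material, and irradiance channels. Every function name and the concrete conditioning scheme shown here (concatenating the edited keyframe's intrinsics with each frame's own) are hypothetical stand-ins for illustration, not the authors' actual model or API.

```python
# Hypothetical sketch of the V-RGBX pipeline described in the abstract.
# All function names and the conditioning layout are illustrative
# assumptions; the paper only states that the framework (1) inverse
# renders a video into intrinsic channels, (2) synthesizes video from
# them, and (3) propagates keyframe edits via interleaved conditioning.
import numpy as np

INTRINSIC_CHANNELS = ("albedo", "normal", "material", "irradiance")

def inverse_render(video: np.ndarray) -> dict:
    """Stage 1 (stub): decompose an RGB video of shape (T, H, W, 3)
    into per-frame intrinsic channels. A real system would use a
    learned model; we return placeholders of matching shape."""
    return {name: np.zeros_like(video) for name in INTRINSIC_CHANNELS}

def edit_keyframe(intrinsics: dict, channel: str, frame_idx: int, edit_fn):
    """Stage 3 (stub): apply a user edit to one intrinsic channel of a
    single keyframe, e.g. recoloring the albedo of frame 0."""
    edited = {k: v.copy() for k, v in intrinsics.items()}
    edited[channel][frame_idx] = edit_fn(edited[channel][frame_idx])
    return edited

def synthesize(intrinsics: dict, video: np.ndarray, keyframe_idx: int):
    """Stage 2 (stub): render video conditioned on intrinsic channels.
    The abstract's 'interleaved conditioning' is approximated here by
    stacking the edited keyframe's intrinsics alongside every frame's
    own intrinsics on the channel axis."""
    keyframe_cond = np.concatenate(
        [intrinsics[c][keyframe_idx] for c in INTRINSIC_CHANNELS], axis=-1)
    per_frame = np.concatenate(
        [intrinsics[c] for c in INTRINSIC_CHANNELS], axis=-1)
    cond = np.concatenate(
        [per_frame, np.broadcast_to(keyframe_cond, per_frame.shape)], axis=-1)
    # A real implementation would feed `cond` to a video generator;
    # we return the input video so the sketch stays runnable.
    return video, cond.shape

video = np.random.rand(8, 64, 64, 3).astype(np.float32)   # (T, H, W, 3)
intrinsics = inverse_render(video)
edited = edit_keyframe(intrinsics, "albedo", 0, lambda f: 1.0 - f)
out, cond_shape = synthesize(edited, video, keyframe_idx=0)
print(out.shape, cond_shape)   # (8, 64, 64, 3) (8, 64, 64, 24)
```

A real system would replace these stubs with the paper's diffusion-based inverse renderer and video generator; the sketch only fixes the tensor shapes and the keyframe-edit-then-propagate interface implied by the abstract.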
Similar Papers
Light-X: Generative 4D Video Rendering with Camera and Illumination Control
CV and Pattern Recognition
Generates videos with controllable camera motion and lighting.
IntrinsicEdit: Precise generative image manipulation in intrinsic space
Graphics
Edits images precisely by manipulating their intrinsic properties.
IE2Video: Adapting Pretrained Diffusion Models for Event-Based Video Reconstruction
CV and Pattern Recognition
Reconstructs video from low-power event camera data.