Zero-Shot Video Deraining with Video Diffusion Models
By: Tuomas Varanka, Juan Luis Gonzalez, Hyeongwoo Kim, and more
Potential Business Impact:
Clears rain from videos without needing special training.
Existing video deraining methods are often trained on paired datasets that are either synthetic, which limits their ability to generalize to real-world rain, or captured by static cameras, which restricts their effectiveness in dynamic scenes with background and camera motion. Recent work on fine-tuning diffusion models has shown promising results, but fine-tuning tends to weaken the generative prior, limiting generalization to unseen cases. In this paper, we introduce the first zero-shot video deraining method for complex dynamic scenes that requires neither synthetic data nor model fine-tuning, by leveraging a pretrained text-to-video diffusion model with strong generalization capabilities. By inverting an input video into the latent space of the diffusion model, its reconstruction process can be intervened upon and pushed away from the model's concept of rain using negative prompting. At the core of our approach is an attention switching mechanism, which we found to be crucial for maintaining dynamic backgrounds and structural consistency between the input and the derained video, mitigating the artifacts introduced by naive negative prompting. Our approach is validated through extensive experiments on real-world rain datasets, demonstrating substantial improvements over prior methods and robust generalization without the need for supervised training.
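The abstract outlines three ingredients: DDIM-style inversion of the input video into the diffusion latent space, negative-prompt guidance that steers the reconstruction away from "rain", and attention switching that injects the reconstruction pass's self-attention into the guided pass to preserve structure. The sketch below is a minimal, hypothetical illustration of how these pieces could fit together; it is not the authors' code, and `ToyVideoDenoiser`, `derain`, and the simplified update rule are stand-in names and approximations, not a real pretrained model or library API.

```python
# Hypothetical sketch: negative-prompt guidance with attention switching.
# The denoiser below is a toy stand-in, NOT the pretrained text-to-video model.
import torch
import torch.nn as nn


class ToyVideoDenoiser(nn.Module):
    """Stand-in for a pretrained text-to-video diffusion denoiser."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, latents, t, text_emb, self_attn_kv=None):
        # A real model would condition on the timestep t, the text embedding,
        # and (optionally) self-attention keys/values injected from another pass.
        return self.net(latents)


@torch.no_grad()
def derain(latents_T, model, emb_null, emb_rain, steps: int = 50, guidance: float = 7.5):
    """Denoise DDIM-inverted latents while steering away from 'rain'.

    latents_T: noisy latents obtained by inverting the input video.
    emb_null:  embedding of an empty / neutral prompt.
    emb_rain:  embedding of the negative prompt (e.g. "rain, rain streaks").
    """
    x = latents_T
    for t in reversed(range(steps)):
        # Reconstruction pass: replays the inverted trajectory; in a real model
        # its self-attention keys/values (the input video's structure and
        # motion) would be captured here, e.g. via forward hooks.
        eps_recon = model(x, t, emb_null)
        kv_recon = None  # placeholder for the captured K/V tensors

        # Negative pass: same latents conditioned on "rain", with self-attention
        # K/V switched to the reconstruction pass to keep background structure.
        eps_rain = model(x, t, emb_rain, self_attn_kv=kv_recon)

        # Classifier-free guidance with the rain prompt as the negative branch:
        # for guidance > 1 the update is pushed away from the rain direction.
        eps = eps_rain + guidance * (eps_recon - eps_rain)

        # Simplified denoising update (the real DDIM schedule is omitted here).
        x = x - eps / steps
    return x
```

Without the attention switching step, the negative pass is free to hallucinate a different scene while removing rain; reusing the reconstruction pass's self-attention is what ties the derained output to the input video's layout and motion.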
Similar Papers
Zero-shot Synthetic Video Realism Enhancement via Structure-aware Denoising
CV and Pattern Recognition
Makes fake videos look like real life.
Fitting Image Diffusion Models on Video Datasets
CV and Pattern Recognition
Makes AI create videos that look more real.
Are Image-to-Video Models Good Zero-Shot Image Editors?
CV and Pattern Recognition
Changes pictures using text instructions.