Score: 2

Zero-Shot Video Deraining with Video Diffusion Models

Published: November 23, 2025 | arXiv ID: 2511.18537v1

By: Tuomas Varanka, Juan Luis Gonzalez, Hyeongwoo Kim, and more

Potential Business Impact:

Removes rain from videos without task-specific training or paired data.

Business Areas:
Image Recognition, Data and Analytics, Software

Existing video deraining methods are typically trained on paired datasets that are either synthetic, which limits generalization to real-world rain, or captured with static cameras, which restricts effectiveness in dynamic scenes with background and camera motion. Recent work on fine-tuning diffusion models has shown promising results, but fine-tuning tends to weaken the generative prior, limiting generalization to unseen cases. In this paper, we introduce the first zero-shot video deraining method for complex dynamic scenes, requiring neither synthetic data nor model fine-tuning, by leveraging a pretrained text-to-video diffusion model with strong generalization capabilities. By inverting an input video into the diffusion model's latent space, the reconstruction process can be steered away from the model's concept of rain using negative prompting. At the core of our approach is an attention switching mechanism that we find crucial for preserving dynamic backgrounds and structural consistency between the input and the derained video, mitigating artifacts introduced by naive negative prompting. We validate our approach through extensive experiments on real-world rain datasets, demonstrating substantial improvements over prior methods and robust generalization without supervised training.
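The abstract outlines a three-step pipeline: DDIM-invert the rainy video into the latent space, re-denoise while steering away from a "rain" negative prompt via classifier-free guidance, and switch self-attention keys/values to those cached during inversion so background motion and structure follow the input. Below is a minimal PyTorch sketch of that pipeline under stated assumptions; the denoiser interface (`denoise_eps`), the prompt embeddings, and the attention-switch stub are hypothetical stand-ins, not the authors' code or any real library's API.

```python
# Minimal sketch of the pipeline described in the abstract. Assumptions:
# `denoise_eps(x, t, cond)` is a video diffusion denoiser predicting noise,
# `alphas` holds cumulative alpha-bar values indexed by timestep, and the
# embeddings `cond` / `neg_cond` come from the model's text encoder. None of
# these names are from the paper or a real library.
import torch

@torch.no_grad()
def ddim_invert(latents, denoise_eps, cond, timesteps, alphas):
    """Deterministic DDIM inversion: walk the clean video latents toward noise."""
    kv_cache = []                                 # per-step self-attn K/V (stub)
    for t_prev, t in zip(timesteps[:-1], timesteps[1:]):
        eps = denoise_eps(latents, t_prev, cond)  # predicted noise at t_prev
        a_prev, a_t = alphas[t_prev], alphas[t]
        x0 = (latents - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        latents = a_t.sqrt() * x0 + (1 - a_t).sqrt() * eps
        kv_cache.append(None)                     # real impl: cache attn K/V here
    return latents, kv_cache

@torch.no_grad()
def derain(latents_T, denoise_eps, cond, neg_cond, timesteps, alphas,
           kv_cache=None, guidance=7.5):
    """DDIM sampling pushed away from the 'rain' concept by negative prompting."""
    latents = latents_T
    steps = list(zip(reversed(timesteps[1:]), reversed(timesteps[:-1])))
    for i, (t, t_prev) in enumerate(steps):
        if kv_cache is not None:
            install_attention_switch(kv_cache[-(i + 1)])  # matching inversion step
        eps_pos = denoise_eps(latents, t, cond)      # e.g. neutral scene prompt
        eps_neg = denoise_eps(latents, t, neg_cond)  # e.g. "rain, rain streaks"
        # Classifier-free guidance with the negative prompt in the unconditional
        # slot, so each update moves AWAY from the model's concept of rain.
        eps = eps_neg + guidance * (eps_pos - eps_neg)
        a_t, a_prev = alphas[t], alphas[t_prev]
        x0 = (latents - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        latents = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return latents

def install_attention_switch(cached_kv):
    """Stub for the abstract's 'attention switching': during deraining, replace
    self-attention keys/values with those cached at the matching inversion step,
    tying background motion and structure to the input video. Which layers and
    steps to switch is not specified in the abstract."""
    pass  # real impl: register forward hooks on the denoiser's attention blocks
```

The attention switch is this sketch's analogue of the abstract's core mechanism: without it, negative prompting alone has nothing tying the denoising trajectory back to the input video, which is the source of the background artifacts the paper reports for naive negative prompting.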

Country of Origin
🇫🇮 🇬🇧 Finland, United Kingdom

Page Count
18 pages

Category
Computer Science:
Computer Vision and Pattern Recognition