Progressive Image Restoration via Text-Conditioned Video Generation
By: Peng Kang, Xijun Wang, Yu Yuan
Potential Business Impact:
Fixes blurry, dark, or low-quality pictures.
Recent text-to-video models have demonstrated strong temporal generation capabilities, yet their potential for image restoration remains underexplored. In this work, we repurpose CogVideo for progressive visual restoration tasks by fine-tuning it to generate restoration trajectories rather than natural video motion. Specifically, we construct synthetic datasets for super-resolution, deblurring, and low-light enhancement, where each sample depicts a gradual transition from degraded to clean frames. Two prompting strategies are compared: a uniform text prompt shared across all samples, and a scene-specific prompting scheme generated with the LLaVA multimodal LLM and refined with ChatGPT. Our fine-tuned model learns to associate temporal progression with restoration quality, producing sequences that improve image-quality metrics such as PSNR, SSIM, and LPIPS from frame to frame. Extensive experiments show that CogVideo effectively restores spatial detail and illumination consistency while maintaining temporal coherence. Moreover, the model generalizes to real-world scenarios on the ReLoBlur dataset without additional training, demonstrating strong zero-shot robustness and interpretability through temporal restoration.
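The abstract's claim that quality metrics improve across the generated frames can be checked by scoring each frame of a restoration trajectory against the clean target. Below is a minimal sketch of that per-frame evaluation using PSNR only; the `psnr` helper, the frame shapes, and the toy noise schedule are illustrative assumptions, not the paper's code or data.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def per_frame_psnr(frames, clean):
    """Score every frame of a restoration trajectory against the clean target."""
    return [psnr(f, clean) for f in frames]

# Toy trajectory: synthetic noise shrinks frame by frame, mimicking a
# degraded-to-clean transition, so PSNR should rise along the sequence.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
frames = [np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0)
          for sigma in (0.30, 0.15, 0.05)]
scores = per_frame_psnr(frames, clean)
print([round(s, 1) for s in scores])
```

In practice SSIM and LPIPS would be tracked the same way (e.g. via `skimage.metrics` and the `lpips` package); a trajectory that restores well should show these curves improving monotonically across frames.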
Similar Papers
Versatile Transition Generation with Image-to-Video Diffusion
CV and Pattern Recognition
Creates smooth video transitions between scenes.
Prompt-based Consistent Video Colorization
CV and Pattern Recognition
Makes old black and white movies colorful automatically.
Visual-CoG: Stage-Aware Reinforcement Learning with Chain of Guidance for Text-to-Image Generation
CV and Pattern Recognition
Makes AI draw better pictures from words.