One-Step Diffusion Model for Image Motion-Deblurring
By: Xiaoyang Liu, Yuquan Wang, Zheng Chen, and others
Potential Business Impact:
Sharpens blurry photos in a single step, fast enough for practical use.
Currently, methods for single-image deblurring based on CNNs and transformers have demonstrated promising performance. However, these methods often exhibit perceptual limitations, generalize poorly, and struggle with heavy or complex blur. While diffusion-based methods can partially address these shortcomings, their multi-step denoising process limits their practical use. In this paper, we conduct an in-depth exploration of diffusion models in deblurring and propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step, significantly improving inference efficiency while maintaining high fidelity. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Additionally, we construct a high-quality synthetic deblurring dataset to mitigate perceptual collapse and design a dynamic dual-adapter (DDA) to enhance perceptual quality while preserving fidelity. Extensive experiments demonstrate that our method achieves strong performance on both full and no-reference metrics. Our code and pre-trained model will be publicly available at https://github.com/xyLiu339/OSDD.
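The abstract's key idea is collapsing the usual multi-step denoising loop into one latent-space jump between an encoder and decoder. The sketch below illustrates that shape of computation only; the random linear maps standing in for the eVAE encoder/decoder, the noise predictor `W_eps`, and the `alpha_bar` value are all hypothetical placeholders, not the paper's actual architecture or weights.

```python
import numpy as np

# Hypothetical dimensions: a flattened "image" vector and a smaller latent.
IMG_DIM, LATENT_DIM = 64, 16
rng = np.random.default_rng(0)

# Random stand-ins for learned components: the eVAE encoder/decoder
# and the diffusion noise predictor (all assumptions, not from the paper).
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)
W_dec = rng.standard_normal((IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
W_eps = rng.standard_normal((LATENT_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def one_step_deblur(blurry, alpha_bar=0.01):
    """One denoising jump in latent space, versus the usual multi-step loop."""
    z_t = W_enc @ blurry                  # encode blurry image into the latent
    eps_hat = W_eps @ z_t                 # predicted noise for this latent
    # DDIM-style closed-form estimate of the clean latent z_0 from z_t:
    z0_hat = (z_t - np.sqrt(1.0 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)
    return W_dec @ z0_hat                 # decode back to image space

restored = one_step_deblur(rng.standard_normal(IMG_DIM))
print(restored.shape)  # (64,)
```

The single `z0_hat` line is where a conventional diffusion sampler would instead iterate hundreds of times, which is the efficiency gap the paper targets.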
Similar Papers
DiffVC-OSD: One-Step Diffusion-based Perceptual Neural Video Compression Framework
Image and Video Processing
Makes videos look better, faster, and smaller.
OS-DiffVSR: Towards One-step Latent Diffusion Model for High-detailed Real-world Video Super-Resolution
CV and Pattern Recognition
Makes blurry videos clear, fast.
A Simple Combination of Diffusion Models for Better Quality Trade-Offs in Image Denoising
CV and Pattern Recognition
Removes image noise while balancing quality trade-offs.