Score: 1

One-Step Diffusion Model for Image Motion-Deblurring

Published: March 9, 2025 | arXiv ID: 2503.06537v1

By: Xiaoyang Liu, Yuquan Wang, Zheng Chen, and more

Potential Business Impact:

Removes blur from images in a single denoising step, combining fast inference with high-fidelity restoration.

Business Areas:
Autonomous Vehicles, Transportation

Currently, methods for single-image deblurring based on CNNs and transformers have demonstrated promising performance. However, these methods often suffer from perceptual limitations and poor generalization, and struggle with heavy or complex blur. While diffusion-based methods can partially address these shortcomings, their multi-step denoising process limits their practical usage. In this paper, we conduct an in-depth exploration of diffusion models in deblurring and propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step, significantly improving inference efficiency while maintaining high fidelity. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Additionally, we construct a high-quality synthetic deblurring dataset to mitigate perceptual collapse and design a dynamic dual-adapter (DDA) to enhance perceptual quality while preserving fidelity. Extensive experiments demonstrate that our method achieves strong performance on both full and no-reference metrics. Our code and pre-trained model will be publicly available at https://github.com/xyLiu339/OSDD.
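
To make the core idea concrete, below is a minimal sketch (not the authors' code) of how a one-step diffusion deblurring pass differs from iterating over a full noise schedule: the blurry input is encoded into a latent space, the diffusion network is run once at a fixed timestep, and the result is decoded back to pixels. The `encoder`, `denoiser`, `decoder` modules and `fixed_t` value are hypothetical stand-ins for OSDD's eVAE encoder/decoder and diffusion backbone.

```python
import torch

@torch.no_grad()
def one_step_deblur(blurry: torch.Tensor,
                    encoder, denoiser, decoder,
                    fixed_t: int = 999) -> torch.Tensor:
    """Deblur an image with a single denoising step.

    blurry: (B, 3, H, W) tensor with values in [0, 1].
    """
    # 1. Encode the blurry image into the VAE latent space.
    latent = encoder(blurry)

    # 2. One forward pass of the diffusion network at a fixed timestep,
    #    instead of looping over a multi-step noise schedule.
    t = torch.full((blurry.shape[0],), fixed_t, device=blurry.device)
    restored_latent = denoiser(latent, t)

    # 3. Decode back to pixel space; in the paper an enhanced decoder (eVAE)
    #    is responsible for recovering fine structure lost in the latent.
    return decoder(restored_latent).clamp(0, 1)
```

The single forward pass is what yields the inference-efficiency gain over multi-step diffusion samplers, while the enhanced decoder and dual-adapter described in the abstract are what the authors rely on to keep fidelity and perceptual quality.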

Repos / Data Links
https://github.com/xyLiu339/OSDD

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition