PG-ControlNet: A Physics-Guided ControlNet for Generative Spatially Varying Image Deblurring
By: Hakki Motorcu, Mujdat Cetin
Potential Business Impact:
Fixes blurry pictures by understanding how they got blurry.
Spatially varying image deblurring remains a fundamentally ill-posed problem, especially when degradations arise from complex mixtures of motion and other forms of blur under significant noise. State-of-the-art learning-based approaches generally fall into two paradigms: (i) model-based deep unrolling methods, which enforce physical constraints by explicitly modeling the degradation but often produce over-smoothed, artifact-laden textures; and (ii) generative models, which achieve superior perceptual quality yet hallucinate details due to weak physical constraints. In this paper, we propose a novel framework that uniquely reconciles these paradigms by taming a powerful generative prior with explicit, dense physical constraints. Rather than oversimplifying the degradation field, we model it as a dense continuum of high-dimensional compressed kernels, ensuring that minute variations in motion and other degradation patterns are captured. We leverage this rich descriptor field to condition a ControlNet architecture, strongly guiding the diffusion sampling process. Extensive experiments demonstrate that our method effectively bridges the gap between physical accuracy and perceptual realism, outperforming state-of-the-art model-based methods as well as generative baselines in challenging, severely blurred scenarios.
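To make the abstract's central idea concrete, the sketch below illustrates (in PyTorch) what a dense per-pixel kernel field and its compressed descriptor could look like: a spatially varying blur is applied pixel-by-pixel, and each kernel is projected onto a low-dimensional basis to form the conditioning map for a ControlNet branch. This is not the authors' code; the kernel size, descriptor dimension, PCA-style basis, and function names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): spatially
# varying blur as a dense per-pixel kernel field, plus a compressed
# descriptor field usable as ControlNet-style conditioning.
import torch
import torch.nn.functional as F

def apply_spatially_varying_blur(x, kernels):
    """Forward degradation model: each output pixel is the inner product of
    its k x k neighborhood with its own kernel.
    x:       (B, C, H, W) sharp image
    kernels: (B, k*k, H, W) per-pixel blur kernels (assumed normalized)
    """
    B, C, H, W = x.shape
    k2 = kernels.shape[1]
    k = int(k2 ** 0.5)
    # Extract a k x k patch around every pixel: (B, C*k*k, H*W)
    patches = F.unfold(x, kernel_size=k, padding=k // 2)
    patches = patches.view(B, C, k2, H * W)
    w = kernels.view(B, 1, k2, H * W)
    y = (patches * w).sum(dim=2)  # per-pixel kernel-weighted sum
    return y.view(B, C, H, W)

def compress_kernel_field(kernels, basis):
    """Project each per-pixel kernel onto a low-dimensional basis (e.g. a
    PCA basis), giving the dense descriptor field used for conditioning.
    kernels: (B, k*k, H, W); basis: (d, k*k) -> returns (B, d, H, W)
    """
    B, k2, H, W = kernels.shape
    flat = kernels.view(B, k2, H * W)
    desc = torch.einsum('dk,bkn->bdn', basis, flat)
    return desc.view(B, -1, H, W)

# Toy usage with random data (dimensions are illustrative).
B, C, H, W, k, d = 1, 3, 64, 64, 15, 16
x = torch.rand(B, C, H, W)
kernels = torch.rand(B, k * k, H, W)
kernels = kernels / kernels.sum(dim=1, keepdim=True)  # normalize each kernel
blurred = apply_spatially_varying_blur(x, kernels)
basis = torch.randn(d, k * k)                 # stand-in for a learned PCA basis
cond = compress_kernel_field(kernels, basis)  # (B, d, H, W) descriptor field
# `cond`, together with the blurry input, would feed the ControlNet branch
# that guides the diffusion sampler toward physically consistent restorations.
```

The key design point the sketch mirrors is that the conditioning signal is dense (one descriptor per pixel) rather than a single global kernel, which is what lets minute spatial variations in the blur be captured.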
Similar Papers
Physics-Informed Image Restoration via Progressive PDE Integration
CV and Pattern Recognition
Cleans up blurry photos using math.
Generative Photographic Control for Scene-Consistent Video Cinematic Editing
CV and Pattern Recognition
Lets you change a movie's look like a pro.
Coding-Prior Guided Diffusion Network for Video Deblurring
CV and Pattern Recognition
Makes blurry videos clear by using hidden clues.