MoTDiff: High-resolution Motion Trajectory estimation from a single blurred image using Diffusion models
By: Wontae Choi, Jaelin Lee, Hyung Sup Yun, and more
Potential Business Impact:
Makes blurry photos show exact movement paths.
Accurate estimation of motion information is crucial in diverse computational imaging and computer vision applications. Researchers have investigated various methods to extract motion information from a single blurred image, including blur kernels and optical flow. However, existing motion representations are often of low quality, i.e., coarse-grained and inaccurate. In this paper, we propose the first high-resolution (HR) Motion Trajectory estimation framework using Diffusion models (MoTDiff). Unlike existing motion representations, we aim to estimate a high-quality HR motion trajectory from a single motion-blurred image. The proposed MoTDiff consists of two key components: 1) a new conditional diffusion framework that uses multi-scale feature maps extracted from a single blurred image as a condition, and 2) a new training method that promotes precise identification of a fine-grained motion trajectory, consistent estimation of the overall shape and position of a motion path, and pixel connectivity along a motion trajectory. Our experiments demonstrate that the proposed MoTDiff outperforms state-of-the-art methods in both blind image deblurring and coded exposure photography applications.
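The conditioning idea in the abstract can be sketched in a toy form: extract features from the blurred image at several scales, then use them to condition each reverse diffusion step. This is a minimal illustrative sketch, not the paper's architecture; the pooling-based feature extractor and the linear `eps_hat` predictor are hypothetical stand-ins for MoTDiff's learned multi-scale encoder and conditional denoising network.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_scale_features(blurred, scales=(1, 2, 4)):
    """Toy stand-in for a multi-scale feature extractor: average-pool the
    blurred image at several scales and stack the upsampled results as
    conditioning channels. (Illustrative assumption, not the paper's encoder.)"""
    h, w = blurred.shape
    feats = []
    for s in scales:
        pooled = blurred.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        feats.append(np.kron(pooled, np.ones((s, s))))  # nearest-neighbor upsample
    return np.stack(feats)  # shape: (len(scales), h, w)

def denoise_step(x_t, t, cond, betas):
    """One toy DDPM-style reverse step. The noise predictor here is a
    placeholder linear function of the conditioning features, standing in
    for the conditional denoising network described in the abstract."""
    beta = betas[t]
    alpha = 1.0 - beta
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps_hat = 0.1 * cond.mean(axis=0)  # hypothetical conditioned predictor
    mean = (x_t - beta / np.sqrt(1.0 - alpha_bar) * eps_hat) / np.sqrt(alpha)
    if t > 0:
        mean = mean + np.sqrt(beta) * rng.standard_normal(x_t.shape)
    return mean

# Toy run: condition on an 8x8 "blurred image", then denoise from noise.
blurred = rng.random((8, 8))
cond = multi_scale_features(blurred)
betas = np.linspace(1e-4, 0.02, 10)
x = rng.standard_normal((8, 8))
for t in reversed(range(len(betas))):
    x = denoise_step(x, t, cond, betas)
```

In the actual framework the sample would be the HR motion trajectory map rather than an image, and the conditioning features would come from a learned encoder over the blurred input.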
Similar Papers
BlurDM: A Blur Diffusion Model for Image Deblurring
CV and Pattern Recognition
Fixes blurry pictures by reversing how they got blurry.
DM$^3$T: Harmonizing Modalities via Diffusion for Multi-Object Tracking
CV and Pattern Recognition
Helps cars see better in fog and dark.
Back to Basics: Motion Representation Matters for Human Motion Generation Using Diffusion Model
CV and Pattern Recognition
Makes computer-generated dancing look more real.