Score: 1

GeoDiffMM: Geometry-Guided Conditional Diffusion for Motion Magnification

Published: December 9, 2025 | arXiv ID: 2512.08325v1

By: Xuedeng Liu, Jiabao Guo, Zheng Zhang, and more

Potential Business Impact:

Makes tiny movements in videos clearly visible.

Business Areas:
Motion Capture, Media and Entertainment, Video

Video Motion Magnification (VMM) amplifies subtle motions to a perceptible level. Existing mainstream Eulerian approaches address amplification-induced noise via decoupled representation learning (e.g., texture, shape, and frequency schemes), but they still struggle to separate photon noise from true micro-motion when motion displacements are very small. We propose GeoDiffMM, a novel diffusion-based Lagrangian VMM framework conditioned on optical flow as a geometric cue, enabling structurally consistent motion magnification. Specifically, we design a Noise-free Optical Flow Augmentation strategy that synthesizes diverse nonrigid motion fields without photon noise as supervision, helping the model learn more accurate geometry-aware optical flow and generalize better. Next, we develop a Diffusion Motion Magnifier that conditions the denoising process on (i) optical flow as a geometry prior and (ii) a learnable magnification factor controlling magnitude, thereby selectively amplifying motion components consistent with scene semantics and structure while suppressing content-irrelevant perturbations. Finally, we perform Flow-based Video Synthesis to map the amplified motion back to the image domain with high fidelity. Extensive experiments on real and synthetic datasets show that GeoDiffMM outperforms state-of-the-art methods and significantly improves motion magnification.
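
As a rough illustration of the Lagrangian idea behind the framework, the sketch below scales a per-pixel optical flow field by a magnification factor and backward-warps the reference frame with the amplified flow. This is a minimal, simplified stand-in rather than the paper's actual Flow-based Video Synthesis module; the function name magnify_and_warp, the toy flow field, and the fixed scalar alpha are assumptions for illustration only.

import numpy as np
from scipy.ndimage import map_coordinates

def magnify_and_warp(frame, flow, alpha):
    # frame: (H, W) grayscale image
    # flow:  (H, W, 2) per-pixel displacements (dy, dx) estimated between frames
    # alpha: magnification factor applied to the flow before warping
    h, w = frame.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward warping with the amplified displacement field: each output
    # pixel samples the reference frame at (y - alpha*dy, x - alpha*dx).
    coords_y = yy - alpha * flow[..., 0]
    coords_x = xx - alpha * flow[..., 1]
    return map_coordinates(frame, [coords_y, coords_x], order=1, mode="nearest")

# Toy usage: a sub-pixel horizontal motion magnified 10x.
frame = np.random.rand(64, 64)
flow = np.zeros((64, 64, 2))
flow[..., 1] = 0.2  # 0.2-pixel motion in x, imperceptible before magnification
magnified = magnify_and_warp(frame, flow, alpha=10.0)

In the paper's pipeline, the flow driving this synthesis step is not a raw estimate scaled by a fixed alpha: it is produced by the Diffusion Motion Magnifier, which conditions denoising on the optical-flow prior and a learnable magnification factor, so that only motion consistent with scene structure is amplified.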

Country of Origin
🇨🇳 China

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition