HAODiff: Human-Aware One-Step Diffusion via Dual-Prompt Guidance
By: Jue Gong, Tingyu Yang, Jingkai Wang, and more
Potential Business Impact:
Fixes blurry, noisy pictures of people.
Human-centered images often suffer from severe generic degradation during transmission and are prone to human motion blur (HMB), making restoration challenging. Existing research pays insufficient attention to these issues, even though the two problems often coexist in practice. To address this, we design a degradation pipeline that simulates the coexistence of HMB and generic noise, generating synthetic degraded data to train our proposed HAODiff, a human-aware one-step diffusion model. Specifically, we propose a triple-branch dual-prompt guidance (DPG) module, which uses high-quality (HQ) images, residual noise (the low-quality input minus the HQ target), and HMB segmentation masks as training targets. It produces a positive-negative prompt pair for classifier-free guidance (CFG) in a single diffusion step. The resulting adaptive dual prompts let HAODiff exploit CFG more effectively, boosting robustness against diverse degradations. For fair evaluation, we introduce MPII-Test, a benchmark rich in cases that combine generic noise and HMB. Extensive experiments show that HAODiff surpasses existing state-of-the-art (SOTA) methods in both quantitative metrics and visual quality on synthetic and real-world datasets, including our MPII-Test. Code is available at: https://github.com/gobunu/HAODiff.
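To make the dual-prompt idea concrete, the sketch below shows how a positive-negative prompt pair can drive classifier-free guidance within a single denoising step. This is a minimal illustration under assumed names (denoiser, pos_embed, neg_embed, guidance_scale), not the HAODiff implementation; consult the linked repository for the authors' actual code.

```python
# Minimal sketch: classifier-free guidance (CFG) with a positive/negative
# prompt pair applied in one denoising step. All names here are hypothetical
# placeholders, not identifiers from the HAODiff codebase.
import torch

@torch.no_grad()
def one_step_cfg_restore(denoiser, lq_latent, pos_embed, neg_embed,
                         guidance_scale=2.0):
    """Single-step restoration with dual-prompt CFG.

    denoiser       -- network predicting a restored latent from the degraded
                      latent, conditioned on a prompt embedding (assumed API)
    lq_latent      -- latent of the degraded (low-quality) input image
    pos_embed      -- "positive" prompt embedding, pulling toward the HQ target
    neg_embed      -- "negative" prompt embedding, describing degradations
                      (e.g. residual noise, human motion blur) to suppress
    guidance_scale -- strength of the CFG extrapolation
    """
    # Evaluate the denoiser once per prompt of the pair.
    pred_pos = denoiser(lq_latent, pos_embed)
    pred_neg = denoiser(lq_latent, neg_embed)

    # Standard CFG combination: start from the negative prediction and
    # extrapolate toward the positive one.
    return pred_neg + guidance_scale * (pred_pos - pred_neg)
```

A larger guidance_scale pushes the output further away from the degradation-describing negative prompt; in the paper's formulation, the key point is that both prompt embeddings are produced adaptively per image by the DPG branches rather than being fixed text prompts.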
Similar Papers
Human Body Restoration with One-Step Diffusion Model and A New Benchmark
CV and Pattern Recognition
Fixes blurry pictures of people automatically.
An Image-like Diffusion Method for Human-Object Interaction Detection
CV and Pattern Recognition
Teaches computers to see people doing things.
Multi-Step Guided Diffusion for Image Restoration on Edge Devices: Toward Lightweight Perception in Embodied AI
CV and Pattern Recognition
Improves blurry pictures for robots and drones.