ETC: Training-free Diffusion Models Acceleration with Error-aware Trend Consistency
By: Jiajian Xie, Hubery Yin, Chen Li, and more
Potential Business Impact:
Makes AI art generation faster without losing quality.
Diffusion models have achieved remarkable generative quality but remain bottlenecked by costly iterative sampling. Recent training-free methods accelerate the diffusion process by reusing model outputs. However, these methods ignore denoising trends and lack error control for model-specific tolerance, leading to trajectory deviations under multi-step reuse and exacerbating inconsistencies in the generated results. To address these issues, we introduce Error-aware Trend Consistency (ETC), a framework that (1) introduces a consistent trend predictor that leverages the smooth continuity of diffusion trajectories, projecting historical denoising patterns into stable future directions and progressively distributing them across multiple approximation steps to achieve acceleration without trajectory deviation; and (2) proposes a model-specific error tolerance search mechanism that derives corrective thresholds by identifying transition points from volatile semantic planning to stable quality refinement. Experiments show that ETC achieves a 2.65x acceleration over FLUX with negligible consistency degradation (−0.074 SSIM).
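The abstract gives only a high-level description of the method, so the sketch below illustrates the general idea of trend-based output reuse with an error fallback. Everything in it is an assumption for illustration, not the paper's algorithm: the function name etc_style_sampler, the first-order trend, the reuse_steps and tol parameters, the volatility test, and the placeholder update rule are all stand-ins.

```python
import numpy as np

def etc_style_sampler(model, x, timesteps, reuse_steps=2, tol=0.05):
    """Sketch: reuse a trend extrapolated from past denoiser outputs
    between full model calls, with an error check that falls back to
    a real call when the trend looks too volatile."""
    history = []  # two most recent outputs (real or extrapolated)
    for i, t in enumerate(timesteps):
        if len(history) >= 2 and i % (reuse_steps + 1) != 0:
            # First-order trend from the two most recent outputs.
            trend = history[-1] - history[-2]
            eps = history[-1] + trend
            # Error control: abandon the extrapolation when the trend
            # is large relative to the output magnitude.
            if np.linalg.norm(trend) > tol * np.linalg.norm(history[-1]):
                eps = model(x, t)
        else:
            eps = model(x, t)  # full denoiser evaluation
        history = (history + [eps])[-2:]
        x = x - 0.1 * eps  # placeholder update; a real sampler uses its own scheme
    return x

# Toy denoiser standing in for a real diffusion model.
toy_model = lambda x, t: x * (t / 1000.0)
sample = etc_style_sampler(toy_model, np.random.randn(4), np.linspace(999.0, 0.0, 20))
```

Per the abstract, the actual method distributes the predicted trend progressively across multiple approximation steps and searches the error tolerance per model; the fixed reuse_steps and tol above stand in for both.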
Similar Papers
ECTSpeech: Enhancing Efficient Speech Synthesis via Easy Consistency Tuning
Sound
Makes computer voices sound real while training faster.
Plug-and-Play Fidelity Optimization for Diffusion Transformer Acceleration via Cumulative Error Minimization
CV and Pattern Recognition
Makes AI create art and videos much faster.
Image-Free Timestep Distillation via Continuous-Time Consistency with Trajectory-Sampled Pairs
CV and Pattern Recognition
Makes AI create pictures much faster.