Are First-Order Diffusion Samplers Really Slower? A Fast Forward-Value Approach
By: Yuchen Jiao, Na Li, Changxiao Cai, and more
Higher-order ODE solvers have become a standard tool for accelerating diffusion probabilistic model (DPM) sampling, motivating the widespread view that first-order methods are inherently slower and that increasing the discretization order is the primary path to faster generation. This paper challenges this belief and revisits acceleration from a complementary angle: beyond solver order, the placement of DPM evaluations along the reverse-time dynamics can substantially affect sampling accuracy in the low neural-function-evaluation (NFE) regime. We propose a novel training-free, first-order sampler whose leading discretization error has the opposite sign to that of DDIM. Algorithmically, the method approximates the forward-value evaluation via a cheap one-step lookahead predictor. We provide theoretical guarantees showing that the resulting sampler provably approximates the ideal forward-value trajectory while retaining first-order convergence. Empirically, across standard image generation benchmarks (CIFAR-10, ImageNet, FFHQ, and LSUN), the proposed sampler consistently improves sample quality under the same NFE budget and is competitive with, and sometimes outperforms, state-of-the-art higher-order samplers. Overall, the results suggest that the placement of DPM evaluations offers an additional, largely independent design axis for accelerating diffusion sampling.
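To make the evaluation-placement idea concrete, here is a minimal sketch of a first-order, DDIM-form sampler that relocates where the DPM is queried via a cheap one-step lookahead. This is an illustrative reconstruction, not the authors' algorithm: the names eps_model and alpha_bar, the VP-type epsilon-prediction parameterization, and the use of a cached epsilon to form the lookahead state are all assumptions of this sketch.

```python
import torch

@torch.no_grad()
def lookahead_ddim_sampler(eps_model, x, timesteps, alpha_bar):
    """Hedged sketch of a first-order sampler with lookahead evaluation.

    eps_model: hypothetical noise predictor, eps_model(x, t) -> eps
    x:         initial Gaussian noise, shape (B, C, H, W)
    timesteps: decreasing list of integer timesteps, e.g. [999, ..., 0]
    alpha_bar: 1-D tensor of cumulative noise-schedule values in (0, 1]
    """
    eps_prev = None
    for t, s in zip(timesteps[:-1], timesteps[1:]):
        a_t, a_s = alpha_bar[t], alpha_bar[s]
        if eps_prev is None:
            # First step: no cached prediction yet, so evaluate at the
            # current state, exactly as DDIM would.
            eps = eps_model(x, t)
        else:
            # Cheap one-step lookahead (an assumption of this sketch):
            # form a provisional DDIM step with the cached epsilon, then
            # evaluate the DPM at that lookahead state (x_hat, s) rather
            # than at (x, t). Still one network call per step.
            x0_prov = (x - (1 - a_t).sqrt() * eps_prev) / a_t.sqrt()
            x_hat = a_s.sqrt() * x0_prov + (1 - a_s).sqrt() * eps_prev
            eps = eps_model(x_hat, s)
        # First-order (DDIM-form) update using the relocated evaluation.
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_s.sqrt() * x0_hat + (1 - a_s).sqrt() * eps
        eps_prev = eps
    return x
```

The design intuition, under these assumptions: evaluating epsilon near the step's endpoint rather than its start mirrors the backward- vs. forward-Euler distinction, which is one way a first-order scheme can flip the sign of its leading discretization error relative to DDIM, while reusing the cached prediction keeps the cost at one NFE per step.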
Similar Papers
DualFast: Dual-Speedup Framework for Fast Sampling of Diffusion Models
CV and Pattern Recognition
Proposes a dual-speedup framework for faster sampling from diffusion models.
Sublinear iterations can suffice even for DDPMs
Machine Learning (CS)
Shows that a sublinear number of sampling iterations can suffice even for DDPMs.
FSampler: Training Free Acceleration of Diffusion Sampling via Epsilon Extrapolation
Machine Learning (CS)
Accelerates diffusion sampling without retraining by extrapolating epsilon (noise) predictions.