AsyncDiff: Asynchronous Timestep Conditioning for Enhanced Text-to-Image Diffusion Inference
By: Longhuan Xu, Feng Yin, Cunjian Chen
Text-to-image diffusion inference typically follows a synchronized schedule, where the numerical integrator advances the latent state to the same timestep at which the denoiser is conditioned. We propose an asynchronous inference mechanism that decouples the two, allowing the denoiser to be conditioned at a different, learned timestep while keeping the image update schedule unchanged. A lightweight timestep prediction module (TPM), trained with Group Relative Policy Optimization (GRPO), selects a more suitable conditioning timestep based on the current state, effectively choosing a desired noise level to control image detail and textural richness. At deployment, a scaling hyperparameter interpolates between the original and de-synchronized timesteps, enabling conservative or aggressive adjustments. To keep the study computationally affordable, we cap inference at 15 steps for SD3.5 and 10 steps for Flux. Evaluated on Stable Diffusion 3.5 Medium and Flux.1-dev across the MS-COCO 2014 and T2I-CompBench datasets, our method optimizes a composite reward that averages ImageReward, HPSv2, CLIP Score, and PickScore, and shows consistent improvements over synchronized inference.
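As a rough illustration of the mechanism (not the authors' code), the sketch below modifies a generic Euler-style sampling loop: the latent update still follows the original noise schedule, while the conditioning timestep passed to the denoiser is interpolated between the scheduled value and the TPM's prediction. The names `denoiser`, `tpm`, `sigmas`, and `scale`, and the velocity-style Euler update, are assumptions made for illustration only.

```python
import torch

@torch.no_grad()
def async_euler_sample(denoiser, tpm, latents, sigmas, cond, scale=1.0):
    """Euler-style sampling with de-synchronized conditioning timesteps (sketch).

    denoiser(x, t, cond): model output (e.g. predicted velocity) conditioned at t
    tpm(x, t):            learned module proposing a conditioning timestep
    sigmas:               decreasing noise levels defining the latent update schedule
    scale:                0.0 -> original (synchronized) timestep,
                          1.0 -> fully de-synchronized (TPM-chosen) timestep
    """
    x = latents
    for i in range(len(sigmas) - 1):
        t_sched = sigmas[i]                      # timestep that drives the latent update
        t_pred = tpm(x, t_sched)                 # TPM's proposed conditioning timestep
        # Interpolate between the synchronized and de-synchronized timesteps.
        t_cond = t_sched + scale * (t_pred - t_sched)

        out = denoiser(x, t_cond, cond)          # denoiser conditioned at t_cond ...
        # ... while the integrator still advances along the original schedule.
        x = x + (sigmas[i + 1] - sigmas[i]) * out
    return x
```

Setting scale=0.0 recovers the standard synchronized sampler; intermediate values trade off between the scheduled and TPM-predicted timesteps, corresponding to the conservative-to-aggressive adjustment described above.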
Similar Papers
Asynchronous Denoising Diffusion Models for Aligning Text-to-Image Generation
CV and Pattern Recognition
Makes AI pictures match words better.
Image-Free Timestep Distillation via Continuous-Time Consistency with Trajectory-Sampled Pairs
CV and Pattern Recognition
Makes AI create pictures much faster.
ADiff4TPP: Asynchronous Diffusion Models for Temporal Point Processes
Machine Learning (CS)
Predicts future events more accurately, even far ahead.