SJD++: Improved Speculative Jacobi Decoding for Training-free Acceleration of Discrete Auto-regressive Text-to-Image Generation
By: Yao Teng, Zhihuan Jiang, Han Shi, and more
Potential Business Impact:
Makes AI create pictures much faster.
Large autoregressive models can generate high-quality, high-resolution images but suffer from slow generation, because they require hundreds to thousands of sequential forward passes for next-token prediction during inference. To accelerate autoregressive text-to-image generation, we propose Speculative Jacobi Decoding++ (SJD++), a training-free probabilistic parallel decoding algorithm. Unlike traditional next-token prediction, SJD++ performs multi-token prediction in each forward pass, drastically reducing the number of generation steps. Specifically, it integrates the iterative multi-token prediction mechanism from Jacobi decoding with the probabilistic drafting-and-verification mechanism from speculative sampling. More importantly, for further acceleration, SJD++ reuses high-confidence draft tokens after each verification phase instead of resampling them all. We conduct extensive experiments on several representative autoregressive text-to-image generation models and demonstrate that SJD++ achieves $2\times$ to $3\times$ inference latency reduction and $2\times$ to $7\times$ step compression, while preserving visual quality with no observable degradation.
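To make the draft-verify-reuse loop concrete, here is a minimal toy sketch of the idea in Python. The model, vocabulary, acceptance threshold, and helper names are all hypothetical stand-ins (the paper's models are large image-token transformers); the rejection step samples directly from the target distribution rather than the residual distribution of full speculative sampling, which is a simplification. Each iteration scores a window of draft tokens in one "parallel" pass, accepts the longest verified prefix, and, in the SJD++ spirit, carries high-confidence trailing drafts into the next round instead of discarding them.

```python
import random

random.seed(0)

VOCAB = 8  # toy vocabulary size (hypothetical)

def target_probs(prefix):
    """Toy stand-in for the autoregressive model: a deterministic
    distribution over the next token given the prefix."""
    h = (sum((i + 1) * t for i, t in enumerate(prefix)) + 3) % VOCAB
    probs = [0.05] * VOCAB
    probs[h] += 1.0 - 0.05 * VOCAB
    return probs

def sample(probs):
    """Sample a token index from a probability vector."""
    r, acc = random.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r < acc:
            return tok
    return VOCAB - 1

def sjd_plus_plus(prompt, n_new, window=4, keep_thresh=0.5):
    """Sketch of speculative Jacobi decoding with draft reuse.
    Drafts are (token, draft_prob) pairs; fresh drafts are uniform."""
    out = list(prompt)
    drafts = [(random.randrange(VOCAB), 1.0 / VOCAB) for _ in range(window)]
    steps = 0
    while len(out) - len(prompt) < n_new:
        steps += 1
        # one "parallel" forward pass: target probs at every draft position
        prefix, p_list = list(out), []
        for tok, _ in drafts:
            p_list.append(target_probs(prefix))
            prefix.append(tok)
        # verification: accept the longest prefix of drafts
        accepted, reject_at = [], None
        for i, (tok, q) in enumerate(drafts):
            if random.random() < min(1.0, p_list[i][tok] / q):
                accepted.append(tok)
            else:
                reject_at = i
                break
        out += accepted
        if reject_at is None:
            # whole window accepted: start a fresh draft window
            drafts = [(random.randrange(VOCAB), 1.0 / VOCAB)
                      for _ in range(window)]
        else:
            # resample the first rejected position (simplified: from p)
            out.append(sample(p_list[reject_at]))
            # SJD++ twist: reuse trailing drafts with high target prob
            tail = [(tok, p_list[j][tok])
                    for j, (tok, _) in enumerate(drafts)
                    if j > reject_at and p_list[j][tok] >= keep_thresh]
            drafts = tail + [(random.randrange(VOCAB), 1.0 / VOCAB)
                             for _ in range(window - len(tail))]
    return out[len(prompt):len(prompt) + n_new], steps

tokens, steps = sjd_plus_plus(prompt=[1, 2], n_new=12)
print(len(tokens), steps)
```

Because every iteration commits at least one token (the resampled one on rejection, or the full window on acceptance), the number of forward passes never exceeds the number of generated tokens, and accepted runs longer than one token are where the step compression comes from.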
Similar Papers
Speculative Jacobi-Denoising Decoding for Accelerating Autoregressive Text-to-image Generation
CV and Pattern Recognition
Makes AI draw pictures much faster.
MC-SJD : Maximal Coupling Speculative Jacobi Decoding for Autoregressive Visual Generation Acceleration
CV and Pattern Recognition
Makes AI create pictures and videos much faster.
VVS: Accelerating Speculative Decoding for Visual Autoregressive Generation via Partial Verification Skipping
CV and Pattern Recognition
Makes AI create pictures much faster.