Sortblock: Similarity-Aware Feature Reuse for Diffusion Model
By: Hanqi Chen, Xu Zhang, Xiaoliu Guan, and more
Potential Business Impact:
Makes AI art generators work much faster.
Diffusion Transformers (DiTs) have demonstrated remarkable generative capabilities, particularly benefiting from Transformer architectures that enhance visual and artistic fidelity. However, their inherently sequential denoising process results in high inference latency, limiting their deployment in real-time scenarios. Existing training-free acceleration approaches typically reuse intermediate features at fixed timesteps or layers, overlooking the evolving semantic focus across denoising stages and Transformer blocks. To address this, we propose Sortblock, a training-free inference acceleration framework that dynamically caches block-wise features based on their similarity across adjacent timesteps. By ranking the evolution of residuals, Sortblock adaptively determines a recomputation ratio, selectively skipping redundant computations while preserving generation quality. Furthermore, we incorporate a lightweight linear prediction mechanism to reduce accumulated errors in skipped blocks. Extensive experiments across various tasks and DiT architectures demonstrate that Sortblock achieves over 2× inference speedup with minimal degradation in output quality, offering an effective and generalizable solution for accelerating diffusion-based generative models.
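The sketch below is an illustrative reading of that idea, not the authors' implementation. It assumes a hypothetical SortblockCache helper with made-up names (select_blocks, run_block, recompute_ratio) and uses nn.Linear layers as stand-ins for DiT blocks: each block's residual (output minus input) is cached across timesteps, blocks are ranked by how much their residuals changed between the last two steps, only the top fraction is recomputed, and skipped blocks reuse a linear extrapolation of their cached residuals.

```python
import torch
import torch.nn as nn

class SortblockCache:
    """Illustrative similarity-aware block cache (hypothetical helper, not the paper's code).

    Residuals (block output minus block input) are cached for the last two
    timesteps. Blocks whose residuals changed the most are recomputed; the
    rest reuse a linear extrapolation of their cached residuals.
    """

    def __init__(self, num_blocks, recompute_ratio=0.5):
        self.num_blocks = num_blocks
        self.recompute_ratio = recompute_ratio
        self.prev = [None] * num_blocks       # residual at timestep t-1
        self.prev_prev = [None] * num_blocks  # residual at timestep t-2

    def select_blocks(self):
        """Rank blocks by residual change across the last two timesteps and
        return the indices to recompute this step (largest change first)."""
        deltas = []
        for i in range(self.num_blocks):
            r1, r2 = self.prev[i], self.prev_prev[i]
            if r1 is None or r2 is None:
                deltas.append((float("inf"), i))  # no history yet: recompute
            else:
                deltas.append(((r1 - r2).abs().mean().item(), i))
        deltas.sort(reverse=True)
        k = max(1, int(self.recompute_ratio * self.num_blocks))
        return {i for _, i in deltas[:k]}

    def run_block(self, idx, block, x, recompute):
        if idx in recompute or self.prev[idx] is None:
            out = block(x)                 # full forward pass
            residual = out - x
        else:
            r1, r2 = self.prev[idx], self.prev_prev[idx]
            # Lightweight linear prediction of the skipped residual.
            residual = (r1 + (r1 - r2)) if r2 is not None else r1
            out = x + residual
        self.prev_prev[idx] = self.prev[idx]
        self.prev[idx] = residual.detach()
        return out

# Toy usage: 8 stand-in "blocks" over a few denoising steps.
blocks = nn.ModuleList([nn.Linear(64, 64) for _ in range(8)])
cache = SortblockCache(num_blocks=8, recompute_ratio=0.5)
x = torch.randn(1, 64)
for t in range(4):
    recompute = cache.select_blocks()
    h = x
    for i, blk in enumerate(blocks):
        h = cache.run_block(i, blk, h, recompute)
```

Note that recompute_ratio is fixed here for simplicity; in the paper the recomputation ratio is determined adaptively from the ranked residual evolution.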
Similar Papers
BWCache: Accelerating Video Diffusion Transformers through Block-Wise Caching
CV and Pattern Recognition
Makes AI videos faster without losing quality.
BlockDance: Reuse Structurally Similar Spatio-Temporal Features to Accelerate Diffusion Transformers
CV and Pattern Recognition
Makes AI art creation much faster.