D$^2$iT: Dynamic Diffusion Transformer for Accurate Image Generation
By: Weinan Jia, Mengqi Huang, Nan Chen, and more
Potential Business Impact:
Makes computer pictures look more real and detailed.
Diffusion models are widely recognized for their ability to generate high-fidelity images. Despite the excellent performance and scalability of the Diffusion Transformer (DiT) architecture, it applies fixed compression across different image regions during the diffusion process, disregarding the naturally varying information densities of those regions. Large compression limits local realism, while small compression increases computational complexity and compromises global consistency, ultimately degrading the quality of generated images. To address these limitations, we propose dynamically compressing different image regions according to their importance, and introduce a novel two-stage framework designed to enhance the effectiveness and efficiency of image generation: (1) Dynamic VAE (DVAE), in the first stage, employs a hierarchical encoder to encode different image regions at different downsampling rates, tailored to their specific information densities, thereby providing more accurate and natural latent codes for the diffusion process. (2) Dynamic Diffusion Transformer (D$^2$iT), in the second stage, generates images by predicting multi-grained noise, consisting of coarse-grained noise (fewer latent codes in smooth regions) and fine-grained noise (more latent codes in detailed regions), through a novel combination of the Dynamic Grain Transformer and the Dynamic Content Transformer. The strategy of combining coarse noise prediction with detailed-region correction unifies global consistency and local realism. Comprehensive experiments on various generation tasks validate the effectiveness of our approach. Code will be released at https://github.com/jiawn-creator/Dynamic-DiT.
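The abstract's core idea of assigning coarse or fine compression by region can be illustrated with a toy sketch. The per-patch variance criterion, threshold, and function name below are assumptions for illustration only; they are not the paper's actual information-density estimator or grain predictor.

```python
import numpy as np

def assign_grain(image, patch=16, thresh=None):
    """Toy illustration of dynamic per-region compression:
    patches with high variance (detailed regions) get a fine grain
    (more latent codes); smooth patches get a coarse grain.
    NOTE: the variance criterion and median threshold are illustrative
    assumptions, not the method proposed in the paper."""
    H, W = image.shape
    gh, gw = H // patch, W // patch
    var = np.empty((gh, gw))
    for i in range(gh):
        for j in range(gw):
            block = image[i * patch:(i + 1) * patch,
                          j * patch:(j + 1) * patch]
            var[i, j] = block.var()
    if thresh is None:
        thresh = np.median(var)
    # 1 = fine grain (detailed region), 0 = coarse grain (smooth region)
    return (var > thresh).astype(int)

# Synthetic example: smooth left half, noisy (detailed) right half.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = rng.standard_normal((64, 32))
grain = assign_grain(img, patch=16)
```

In this sketch the smooth left half of the image is mapped to coarse-grain patches and the high-variance right half to fine-grain patches, mirroring the abstract's split between fewer latent codes in smooth regions and more in detailed ones.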
Similar Papers
DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation
CV and Pattern Recognition
Makes AI art creation much faster and cheaper.
DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation
CV and Pattern Recognition
Makes computers create amazing pictures from words.
DDiT: Dynamic Resource Allocation for Diffusion Transformer Model Serving
Distributed, Parallel, and Cluster Computing
Makes computer videos from words faster.