EDiT: Efficient Diffusion Transformers with Linear Compressed Attention
By: Philipp Becker, Abhinav Mehrotra, Ruchika Chavhan, and more
Potential Business Impact:
Makes AI create pictures just as good, but much faster.
Diffusion Transformers (DiTs) have emerged as a leading architecture for text-to-image synthesis, producing high-quality, photorealistic images. However, the quadratic scaling of attention in DiTs hinders high-resolution image generation and deployment on devices with limited resources. This work introduces the Efficient Diffusion Transformer (EDiT) to alleviate these efficiency bottlenecks in conventional DiTs and Multimodal DiTs (MM-DiTs). First, we present a novel linear compressed attention method that uses a multi-layer convolutional network to modulate queries with local information while keys and values are aggregated spatially. Second, we formulate a hybrid attention scheme for multimodal inputs that combines linear attention for image-to-image interactions with standard scaled dot-product attention for all interactions involving the prompt. Merging these two approaches yields an expressive, linear-time Multimodal Efficient Diffusion Transformer (MM-EDiT). We demonstrate the effectiveness of EDiT and MM-EDiT by integrating them into PixArt-Sigma (a conventional DiT) and Stable Diffusion 3.5-Medium (an MM-DiT), achieving up to a 2.2x speedup with comparable image quality after distillation.
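To make the first idea concrete, here is a minimal PyTorch sketch of a linear compressed attention block as described in the abstract. It is an illustrative reconstruction, not the authors' implementation: the depthwise convolutional query-modulation network, the use of average pooling to aggregate keys and values, the `pool` factor, and the ELU feature map are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearCompressedAttention(nn.Module):
    """Sketch: linear attention with spatially compressed keys/values and
    convolutionally modulated queries (single head, for clarity)."""

    def __init__(self, dim: int, pool: int = 4):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        # Hypothetical multi-layer conv net that mixes local information
        # into the queries (depthwise 3x3 convs keep the cost linear).
        self.q_conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
        )
        self.pool = pool  # spatial aggregation factor (assumed value)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (batch, h*w, dim) image tokens laid out on an h x w grid.
        b, n, d = x.shape
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # Modulate queries with local information from the conv network.
        q2d = q.transpose(1, 2).reshape(b, d, h, w)
        q = (q2d + self.q_conv(q2d)).reshape(b, d, n).transpose(1, 2)

        # Aggregate keys and values spatially (here: average pooling),
        # shrinking the sequence from n to m = n / pool^2 tokens.
        def compress(t: torch.Tensor) -> torch.Tensor:
            t2d = t.transpose(1, 2).reshape(b, d, h, w)
            return F.avg_pool2d(t2d, self.pool).flatten(2).transpose(1, 2)

        k, v = compress(k), compress(v)

        # Linear attention with a positive feature map: cost scales
        # linearly in n, instead of the quadratic cost of softmax attention.
        q, k = F.elu(q) + 1, F.elu(k) + 1
        kv = torch.einsum("bmd,bme->bde", k, v)             # (b, d, d)
        norm = torch.einsum("bnd,bd->bn", q, k.sum(dim=1))  # (b, n)
        out = torch.einsum("bnd,bde->bne", q, kv) / (norm.unsqueeze(-1) + 1e-6)
        return self.to_out(out)
```

Note that the queries stay at full resolution, so every image token still produces an output; only the keys and values are compressed, which is what bounds the cost. The hybrid multimodal scheme can be sketched in the same spirit: image queries use the linear path for image tokens and standard scaled dot-product attention for prompt tokens, while prompt queries attend to the full sequence with standard attention. Here `linear_attn` stands for any callable computing linear attention over the given projections (e.g. the feature-map formulation above), and summing the two image branches is an assumed combination rule, not necessarily the paper's exact formulation.

```python
def hybrid_mm_attention(q_img, k_img, v_img, q_txt, k_txt, v_txt, linear_attn):
    # q_img/k_img/v_img: (b, n_img, d) projected image tokens;
    # q_txt/k_txt/v_txt: (b, n_txt, d) projected prompt tokens.
    # Image-to-image interactions: the linear path.
    img_img = linear_attn(q_img, k_img, v_img)
    # Image-to-prompt interactions: standard scaled dot-product attention.
    img_txt = F.scaled_dot_product_attention(q_img, k_txt, v_txt)
    # Prompt queries attend to the full multimodal sequence with SDPA.
    k_all = torch.cat([k_img, k_txt], dim=1)
    v_all = torch.cat([v_img, v_txt], dim=1)
    txt_out = F.scaled_dot_product_attention(q_txt, k_all, v_all)
    # Summing the two image branches is an illustrative choice (assumption).
    return img_img + img_txt, txt_out
```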
Similar Papers
Exploring Multimodal Diffusion Transformers for Enhanced Prompt-based Image Editing
CV and Pattern Recognition
Changes pictures using words, better than before.
DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation
CV and Pattern Recognition
Makes computers create amazing pictures from words.
LiT: Delving into a Simple Linear Diffusion Transformer for Image Generation
CV and Pattern Recognition
Makes AI draw pictures much faster and better.