OminiControl2: Efficient Conditioning for Diffusion Transformers
By: Zhenxiong Tan, Qiaochu Xue, Xingyi Yang, and more
Potential Business Impact:
Makes AI draw pictures faster and better.
Fine-grained control of text-to-image diffusion transformer (DiT) models remains a critical challenge for practical deployment. While recent advances such as OminiControl have enabled controllable generation with diverse control signals, these methods incur significant computational overhead when handling long conditional inputs. We present OminiControl2, a framework for efficient image-conditioned image generation. OminiControl2 introduces two key innovations: (1) a dynamic compression strategy that streamlines conditional inputs by preserving only the most semantically relevant tokens during generation, and (2) a conditional feature reuse mechanism that computes condition token features only once and reuses them across denoising steps. These architectural improvements preserve the original framework's parameter efficiency and multi-modal versatility while dramatically reducing computational cost. Our experiments show that OminiControl2 reduces conditional processing overhead by over 90% relative to its predecessor, yielding an overall 5.9× speedup in multi-conditional generation scenarios. This efficiency makes complex, multi-modal control practical for high-quality image synthesis with DiT models.
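To make the two mechanisms concrete, here is a minimal Python sketch of the general idea: top-k selection of condition tokens by a relevance score (dynamic compression) and encoding the condition once outside the denoising loop (feature reuse). All names here (encoder, dit_step, the 0.1 keep ratio, the relevance scores) are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

def compress_condition_tokens(tokens, importance, keep_ratio=0.1):
    """Dynamic compression: keep only the top-k most relevant condition tokens.
    tokens: (N, D) condition features; importance: (N,) relevance scores
    (e.g., attention mass over the condition tokens)."""
    k = max(1, int(keep_ratio * tokens.shape[0]))
    idx = torch.topk(importance, k).indices
    return tokens[idx]

# Toy stand-ins for the condition encoder and the DiT denoising step (assumptions).
encoder = nn.Linear(16, 16)
dit_step = lambda latents, t, cond: latents - 0.01 * cond.mean()

cond_image_tokens = torch.randn(1024, 16)   # a long conditional input
importance = torch.rand(1024)               # stand-in relevance scores

# Feature reuse: encode and compress the condition ONCE, before the loop,
# rather than reprocessing it at every denoising step.
with torch.no_grad():
    cond_feats = encoder(cond_image_tokens)
    cond_feats = compress_condition_tokens(cond_feats, importance, keep_ratio=0.1)

latents = torch.randn(64, 16)
for t in range(50):                          # denoising steps
    # The cached, compressed condition features are reused at every step,
    # so per-step conditional processing overhead is eliminated.
    latents = dit_step(latents, t, cond_feats)
```

Because the condition tokens are both pruned and cached, the per-step cost of attending to them shrinks with the keep ratio and is paid only once, which is consistent with the large reduction in conditional processing overhead the abstract reports.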
Similar Papers
EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer
CV and Pattern Recognition
Makes AI art creation faster and more flexible.
FullDiT2: Efficient In-Context Conditioning for Video Diffusion Transformers
CV and Pattern Recognition
Makes video creation faster and easier.
NanoControl: A Lightweight Framework for Precise and Efficient Control in Diffusion Transformer
CV and Pattern Recognition
Makes AI art creation faster and cheaper.