Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis
By: Bingda Tang, Boyang Zheng, Xichen Pan, and more
Potential Business Impact:
Makes AI create better pictures from words.
This paper does not describe a new method; instead, it provides a thorough exploration of an important yet understudied design space related to recent advances in text-to-image synthesis -- specifically, the deep fusion of large language models (LLMs) and diffusion transformers (DiTs) for multi-modal generation. Previous studies mainly focused on overall system performance rather than detailed comparisons with alternative methods, and key design details and training recipes were often left undisclosed. These gaps create uncertainty about the real potential of this approach. To fill these gaps, we conduct an empirical study on text-to-image generation, performing controlled comparisons with established baselines, analyzing important design choices, and providing a clear, reproducible recipe for training at scale. We hope this work offers meaningful data points and practical guidelines for future research in multi-modal generation.
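To make the "deep fusion" idea concrete, the sketch below shows one plausible form of a fused transformer block in PyTorch: text tokens from an LLM and image latent tokens from a DiT are concatenated and processed with joint self-attention, while each modality keeps its own feed-forward parameters. This is a minimal illustration under our own assumptions; the class name, dimensions, and layer layout are hypothetical and not taken from the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn

class DeepFusionBlock(nn.Module):
    """Hypothetical fused transformer block: text tokens (LLM stream)
    and image latent tokens (DiT stream) share joint self-attention,
    then pass through modality-specific feed-forward layers.
    Illustrative only; not the paper's architecture."""

    def __init__(self, dim: int = 1024, num_heads: int = 16):
        super().__init__()
        self.norm1_txt = nn.LayerNorm(dim)
        self.norm1_img = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2_txt = nn.LayerNorm(dim)
        self.norm2_img = nn.LayerNorm(dim)
        self.mlp_txt = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.mlp_img = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, txt: torch.Tensor, img: torch.Tensor):
        # Joint self-attention over the concatenated sequence: every text
        # token can attend to every image token and vice versa. This is
        # the "fusion" step, repeated at each block of the stack.
        x = torch.cat([self.norm1_txt(txt), self.norm1_img(img)], dim=1)
        fused, _ = self.attn(x, x, x, need_weights=False)
        t, i = fused.split([txt.shape[1], img.shape[1]], dim=1)
        txt, img = txt + t, img + i
        # Separate MLPs keep per-modality parameters, so the text stream
        # can stay close to the pretrained LLM weights.
        txt = txt + self.mlp_txt(self.norm2_txt(txt))
        img = img + self.mlp_img(self.norm2_img(img))
        return txt, img

# Usage: fuse a prompt of 77 text tokens with 256 image latent tokens.
block = DeepFusionBlock()
txt = torch.randn(2, 77, 1024)   # (batch, text tokens, dim)
img = torch.randn(2, 256, 1024)  # (batch, image tokens, dim)
txt, img = block(txt, img)
```

In this reading, "deep" fusion means the two modalities interact at every layer of the network rather than only through a one-shot text-conditioning interface, which is the design space the paper's controlled comparisons examine.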
Similar Papers
Enhancing Image Generation Fidelity via Progressive Prompts
CV and Pattern Recognition
Makes AI draw pictures exactly where you want.
X2I: Seamless Integration of Multimodal Understanding into Diffusion Transformer via Attention Distillation
CV and Pattern Recognition
Lets computers create pictures from sounds and videos.
DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text-to-Image Generation
CV and Pattern Recognition
Makes computers create amazing pictures from words.