Scaling Down Text Encoders of Text-to-Image Diffusion Models
By: Lifu Wang, Daqing Liu, Xinchen Liu, and more
Potential Business Impact:
Makes AI art generators much smaller and faster.
Text encoders in diffusion models have rapidly evolved, transitioning from CLIP to T5-XXL. Although this evolution has significantly enhanced the models' ability to understand complex prompts and generate text, it has also led to a substantial increase in the number of parameters. Although T5-series encoders are trained on the C4 natural language corpus, which includes a significant amount of non-visual data, diffusion models with a T5 encoder do not respond to those non-visual prompts, indicating redundancy in representational power. This raises an important question: "Do we really need such a large text encoder?" In pursuit of an answer, we employ vision-based knowledge distillation to train a series of T5 encoder models. To fully inherit the capabilities of T5-XXL, we constructed our dataset based on three criteria: image quality, semantic understanding, and text rendering. Our results demonstrate a scaling-down pattern: the distilled T5-base model can generate images of comparable quality to those produced by T5-XXL while being 50 times smaller. This reduction in model size significantly lowers the GPU requirements for running state-of-the-art models such as FLUX and SD3, making high-quality text-to-image generation more accessible.
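The abstract does not specify the training loop, but the general idea of vision-based knowledge distillation can be illustrated with a minimal PyTorch sketch: a frozen T5-XXL teacher and a trainable T5-base student both condition the same frozen diffusion backbone, and the student is trained so its conditioning reproduces the teacher's noise predictions. The projection layer, the `diffusion_backbone` callable, and all hyperparameters below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from transformers import T5EncoderModel, T5Tokenizer

# Frozen teacher encoder and trainable student encoder (sizes from HF hub).
teacher = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl").eval()
student = T5EncoderModel.from_pretrained("google/t5-v1_1-base")
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")

# Assumed: project the student's hidden size (768) up to the teacher's
# (4096) so the frozen diffusion backbone can consume either encoder.
proj = nn.Linear(student.config.d_model, teacher.config.d_model)

for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(
    list(student.parameters()) + list(proj.parameters()), lr=1e-4
)

def distill_step(prompts, diffusion_backbone, latents, timesteps):
    """One hypothetical distillation step. `diffusion_backbone` is an
    assumed frozen callable mapping (latents, timesteps, text embeddings)
    to a noise prediction; the image-space supervision it provides is the
    'vision-based' signal."""
    tokens = tokenizer(prompts, padding=True, return_tensors="pt")
    with torch.no_grad():
        t_emb = teacher(**tokens).last_hidden_state
        target = diffusion_backbone(
            latents, timesteps, encoder_hidden_states=t_emb
        )
    s_emb = proj(student(**tokens).last_hidden_state)
    pred = diffusion_backbone(latents, timesteps, encoder_hidden_states=s_emb)
    # Match the student-conditioned prediction to the teacher-conditioned one.
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Because the supervision flows through the diffusion model rather than the raw text features, a distillation of this shape only preserves the encoder capacity that actually affects generated images, which is consistent with the paper's observation that much of T5-XXL's non-visual representational power is redundant here.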
Similar Papers
A Comprehensive Study of Decoder-Only LLMs for Text-to-Image Generation
CV and Pattern Recognition
Makes AI draw better pictures from words.
Toward Lightweight and Fast Decoders for Diffusion Models in Image and Video Generation
CV and Pattern Recognition
Makes AI create pictures and videos much faster.
TextInVision: Text and Prompt Complexity Driven Visual Text Generation Benchmark
CV and Pattern Recognition
Makes AI draw pictures with correct words.