SVG-T2I: Scaling Up Text-to-Image Latent Diffusion Model Without Variational Autoencoder
By: Minglei Shi, Haolin Wang, Borui Zhang, and more
Potential Business Impact:
Makes computers create pictures from words.
Visual generation grounded in Visual Foundation Model (VFM) representations offers a highly promising unified pathway for integrating visual understanding, perception, and generation. Despite this potential, training large-scale text-to-image diffusion models entirely within the VFM representation space remains largely unexplored. To bridge this gap, we scale the SVG (Self-supervised representations for Visual Generation) framework, proposing SVG-T2I to support high-quality text-to-image synthesis directly in the VFM feature domain. By leveraging a standard text-to-image diffusion pipeline, SVG-T2I achieves competitive performance, reaching 0.75 on GenEval and 85.78 on DPG-Bench. This performance validates the intrinsic representational power of VFMs for generative tasks. We fully open-source the project, including the autoencoder and generation model, together with their training, inference, evaluation pipelines, and pre-trained weights, to facilitate further research in representation-driven visual generation.
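To make the core idea concrete, here is a minimal, hedged sketch of diffusion in a VFM feature space rather than a VAE latent space: images are encoded by a frozen visual foundation model into feature tokens, a text-conditioned denoiser is trained on those tokens, and generated features are later decoded back to pixels. All module names, shapes, and the flow-matching-style objective below are illustrative assumptions, not the SVG-T2I implementation.

```python
# Hedged sketch: text-to-image diffusion over VFM feature tokens instead of a
# VAE latent space. Shapes, module names, and the objective are assumptions.
import torch
import torch.nn as nn

class FeatureDenoiser(nn.Module):
    """Toy text-conditioned denoiser over VFM feature tokens."""
    def __init__(self, dim=768, text_dim=512, depth=4, heads=8):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, dim)
        self.time_proj = nn.Linear(1, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_feats, text_emb, t):
        # Prepend text and timestep tokens, then denoise the feature sequence.
        cond = torch.stack([self.text_proj(text_emb),
                            self.time_proj(t[:, None])], dim=1)
        x = torch.cat([cond, noisy_feats], dim=1)
        x = self.blocks(x)
        return self.out(x[:, cond.shape[1]:])  # keep predictions for feature tokens only

def training_step(model, clean_feats, text_emb):
    # One training step with a simple flow-matching-style velocity target.
    b = clean_feats.shape[0]
    t = torch.rand(b)                                    # random timestep in [0, 1]
    noise = torch.randn_like(clean_feats)
    noisy = (1 - t[:, None, None]) * clean_feats + t[:, None, None] * noise
    target = noise - clean_feats
    pred = model(noisy, text_emb, t)
    return nn.functional.mse_loss(pred, target)

if __name__ == "__main__":
    model = FeatureDenoiser()
    feats = torch.randn(2, 256, 768)   # stand-in for frozen-VFM patch features
    text = torch.randn(2, 512)         # stand-in for a text-encoder embedding
    loss = training_step(model, feats, text)
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

In this framing, the VAE of a standard latent diffusion pipeline is replaced by a frozen VFM encoder plus a learned feature-to-pixel decoder, which is the component the open-sourced autoencoder would correspond to.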
Similar Papers
Latent Diffusion Model without Variational Autoencoder
CV and Pattern Recognition
Makes AI create pictures faster and better.
Style Customization of Text-to-Vector Generation with Image Diffusion Priors
Graphics
Makes computer drawings match any art style.
UniVG: A Generalist Diffusion Model for Unified Image Generation and Editing
CV and Pattern Recognition
One AI makes many kinds of pictures.