
VUGEN: Visual Understanding priors for GENeration

Published: October 8, 2025 | arXiv ID: 2510.06529v1

By: Xiangyi Chen, Théophane Vallaeys, Maha Elbayad, and more

BigTech Affiliations: Meta

Potential Business Impact:

Lets a single vision-language model both understand images and generate high-quality images from text descriptions, without degrading its understanding abilities.

Business Areas:
Visual Search, Internet Services

Recent advances in Vision-Language Models (VLMs) have enabled unified understanding across text and images, yet equipping these models with robust image generation capabilities remains challenging. Existing approaches often rely on reconstruction-oriented autoencoders or complex bridging mechanisms, leading either to misalignment between understanding and generation representations or to architectural complexity. In this work, we propose VUGEN, a novel framework that explicitly leverages a VLM's pretrained visual understanding priors for efficient and high-quality image generation. Our approach first transforms the high-dimensional latent space of the VLM's native vision encoder into a lower-dimensional, tractable distribution that maximally preserves visual information. The VLM is then trained to sample within this reduced latent space, ensuring alignment with its visual understanding capabilities. Finally, a dedicated pixel decoder maps these generated latents back to image space. We find a VAE-free pixel diffusion decoder to be on par with, or better than, the commonly used and more complex latent diffusion decoders that internally rely on VAE latents. Extensive experiments demonstrate that VUGEN achieves superior image generation performance, improving DPG-Bench from 71.17 to 74.32 and FID from 11.86 to 9.06 on COCO, while fully preserving the VLM's original understanding capabilities.
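To make the three-step pipeline in the abstract concrete, here is a minimal PyTorch sketch of the data flow: a reducer that compresses the vision encoder's latents (step 1), a sampling head standing in for the VLM's generation of reduced latents (step 2), and a decoder that maps latents back to pixels (step 3). All module names, dimensions, the Gaussian sampling head, and the MLP decoder are illustrative assumptions, not the authors' released code; in the paper, the decoder is a VAE-free pixel diffusion model.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only; the paper does not state these values here.
ENC_DIM, RED_DIM, TXT_DIM, IMG = 1024, 64, 768, 64

class LatentReducer(nn.Module):
    """Step 1: compress the vision encoder's high-dimensional latents into
    a lower-dimensional space while preserving visual information."""
    def __init__(self):
        super().__init__()
        self.down = nn.Linear(ENC_DIM, RED_DIM)
        self.up = nn.Linear(RED_DIM, ENC_DIM)  # reconstruction path for training

    def forward(self, z):
        z_red = self.down(z)
        return z_red, self.up(z_red)  # reduced latent + reconstruction

class LatentSampler(nn.Module):
    """Step 2: a stand-in for the VLM head trained to sample reduced
    latents conditioned on its own text representation."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(TXT_DIM, 2 * RED_DIM)  # predicts mean and log-variance

    def forward(self, text_emb):
        mu, logvar = self.head(text_emb).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample

class PixelDecoder(nn.Module):
    """Step 3: maps reduced latents directly to pixels. A plain MLP is used
    here as a stand-in for the paper's pixel diffusion decoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(RED_DIM, 1024), nn.GELU(),
            nn.Linear(1024, 3 * IMG * IMG),
        )

    def forward(self, z_red):
        return self.net(z_red).view(-1, 3, IMG, IMG)

# Step 1 in isolation: compress placeholder vision-encoder features.
enc_lat = torch.randn(2, ENC_DIM)
z_target, recon = LatentReducer()(enc_lat)  # training target + reconstruction check

# End-to-end generation sketch: text embedding -> reduced latent -> image.
text_emb = torch.randn(2, TXT_DIM)  # placeholder VLM text features
z_red = LatentSampler()(text_emb)
images = PixelDecoder()(z_red)
print(images.shape)  # torch.Size([2, 3, 64, 64])
```

The key design point the sketch mirrors is that generation happens in the reduced latent space aligned with the VLM's understanding representations, and the pixel decoder never touches a VAE latent.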

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition