Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation
By: Anlin Zheng, Xin Wen, Xuanyang Zhang, and more
Potential Business Impact:
Makes computers draw better pictures faster.
Leveraging the powerful representations of pre-trained vision foundation models, traditionally used for visual comprehension, we explore a novel direction: building an image tokenizer directly atop such models, a largely underexplored area. Specifically, we employ a frozen vision foundation model as the encoder of our tokenizer. To enhance its effectiveness, we introduce two key components: (1) a region-adaptive quantization framework that reduces redundancy in the pre-trained features on regular 2D grids, and (2) a semantic reconstruction objective that aligns the tokenizer's outputs with the foundation model's representations to preserve semantic fidelity. Based on these designs, our proposed image tokenizer, VFMTok, achieves substantial improvements in image reconstruction and generation quality while also enhancing token efficiency. It further boosts autoregressive (AR) generation, achieving a gFID of 2.07 on ImageNet benchmarks, accelerating model convergence threefold, and enabling high-fidelity class-conditional synthesis without classifier-free guidance (CFG). The code will be released publicly to benefit the community.
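To make the recipe concrete, below is a minimal sketch (not the authors' released code) of the tokenizer structure the abstract describes: a frozen foundation-model encoder, a vector quantizer over its patch features, and an auxiliary semantic reconstruction loss that pulls the quantized tokens back toward the frozen encoder's representations. The paper's region-adaptive quantization is simplified here to plain per-token VQ, the decoder is a placeholder linear head, and every class name, shape, and hyperparameter (VFMTokenizerSketch, DummyVFM, code_dim=256, num_codes=4096) is an illustrative assumption rather than the paper's configuration.

```python
# Minimal sketch of a VFMTok-style tokenizer; all names, shapes, and
# hyperparameters are illustrative assumptions, and the paper's
# region-adaptive quantization is simplified to plain per-token VQ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbor vector quantization with a straight-through estimator."""
    def __init__(self, num_codes=4096, dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                                  # z: (B, N, D)
        w = self.codebook.weight                           # (K, D)
        # squared Euclidean distance from each token to each code
        d = (z.pow(2).sum(-1, keepdim=True)
             - 2 * z @ w.t()
             + w.pow(2).sum(-1))                           # (B, N, K)
        idx = d.argmin(dim=-1)                             # discrete token ids
        zq = self.codebook(idx)                            # (B, N, D)
        # codebook loss + commitment loss, then straight-through gradients
        loss = F.mse_loss(zq, z.detach()) + self.beta * F.mse_loss(z, zq.detach())
        zq = z + (zq - z).detach()
        return zq, idx, loss

class DummyVFM(nn.Module):
    """Stand-in for a real pre-trained backbone (e.g., a ViT from DINOv2/CLIP)."""
    def __init__(self, num_patches=196, dim=768):
        super().__init__()
        self.num_patches, self.dim = num_patches, dim
    def forward(self, x):                                  # x: (B, 3, H, W)
        return torch.randn(x.size(0), self.num_patches, self.dim)

class VFMTokenizerSketch(nn.Module):
    def __init__(self, vfm, feat_dim=768, code_dim=256):
        super().__init__()
        self.vfm = vfm.eval()                              # frozen VFM encoder
        for p in self.vfm.parameters():
            p.requires_grad_(False)
        self.proj = nn.Linear(feat_dim, code_dim)
        self.quant = VectorQuantizer(dim=code_dim)
        self.pixel_head = nn.Linear(code_dim, feat_dim)    # placeholder for a real image decoder
        self.sem_head = nn.Linear(code_dim, feat_dim)      # predicts VFM features back

    def forward(self, images):
        with torch.no_grad():
            feats = self.vfm(images)                       # (B, N, feat_dim) patch features
        z = self.proj(feats)
        zq, idx, vq_loss = self.quant(z)
        # semantic reconstruction: keep quantized tokens aligned with the
        # frozen foundation model's representations
        sem_loss = F.mse_loss(self.sem_head(zq), feats)
        recon = self.pixel_head(zq)
        return recon, idx, vq_loss + sem_loss

if __name__ == "__main__":
    tok = VFMTokenizerSketch(DummyVFM())
    recon, ids, loss = tok(torch.randn(2, 3, 224, 224))
    print(ids.shape, loss.item())                          # (2, 196) token ids, scalar loss
```

In a real system the pixel head would be a full image decoder trained with reconstruction losses; the semantic term here corresponds to the abstract's objective of aligning the tokenizer's outputs with the foundation model's representations, which is the design it credits for preserving semantic fidelity in the discrete tokens.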
Similar Papers
Vision Foundation Models Can Be Good Tokenizers for Latent Diffusion Models
CV and Pattern Recognition
Makes AI art look better and faster.
A Token-level Text Image Foundation Model for Document Understanding
CV and Pattern Recognition
Helps computers read tiny text in pictures.
VTBench: Evaluating Visual Tokenizers for Autoregressive Image Generation
CV and Pattern Recognition
Makes AI draw clearer pictures with better details.