DINO-Tok: Adapting DINO for Visual Tokenizers
By: Mingkai Jia, Mingxiao Li, Liaoyuan Fan, and more
Potential Business Impact:
Helps AI turn pictures into compact codes and recreate them more clearly and accurately.
Recent advances in visual generation have highlighted the rise of Latent Generative Models (LGMs), which rely on effective visual tokenizers to bridge pixels and semantics. However, existing tokenizers are typically trained from scratch and struggle to balance semantic representation and reconstruction fidelity, particularly in high-dimensional latent spaces. In this work, we introduce DINO-Tok, a DINO-based visual tokenizer that unifies hierarchical representations into an information-complete latent space. By integrating shallow features that retain fine-grained details with deep features encoding global semantics, DINO-Tok effectively bridges pretrained representations and visual generation. We further analyze the challenges of vector quantization (VQ) in this high-dimensional space, where key information is often lost and codebook collapse occurs. We thus propose a global PCA reweighting mechanism to stabilize VQ and preserve essential information across dimensions. On ImageNet 256$\times$256, DINO-Tok achieves state-of-the-art reconstruction performance, reaching 28.54 PSNR for autoencoding and 23.98 PSNR for VQ-based modeling, significantly outperforming prior tokenizers and matching models trained on billion-scale data (such as Hunyuan and Wan). These results demonstrate that adapting powerful pretrained vision models like DINO for tokenization yields semantically aligned, high-fidelity latent representations, paving the way for next-generation visual generative models. Code will be publicly available at https://github.com/MKJia/DINO-Tok.
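To make the abstract's central mechanism concrete, here is a minimal sketch of vector quantization with a global PCA reweighting, in the spirit of what the paper describes. This is a hypothetical PyTorch rendering, not code from the DINO-Tok repository: the class name `PCAReweightedVQ`, the `fit` method, the variance-normalized weights, and all shapes are assumptions, and the authors' actual formulation may differ.

```python
import torch
import torch.nn as nn

class PCAReweightedVQ(nn.Module):
    """Sketch of VQ whose nearest-neighbour lookup is reweighted by a
    global PCA over encoder features (assumed design, not the paper's code)."""

    def __init__(self, codebook_size=8192, dim=768):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)
        self.codebook.weight.data.uniform_(-1.0 / codebook_size, 1.0 / codebook_size)
        # Global PCA statistics, estimated once over a sample of latents.
        self.register_buffer("mean", torch.zeros(dim))
        self.register_buffer("basis", torch.eye(dim))    # principal axes, (dim, dim)
        self.register_buffer("weight", torch.ones(dim))  # per-axis importance

    @torch.no_grad()
    def fit(self, feats):
        # feats: (N, dim) latents sampled from the frozen DINO encoder.
        self.mean.copy_(feats.mean(dim=0))
        centered = feats - self.mean
        # Eigendecomposition of the covariance gives the PCA basis and
        # per-axis variances; high-variance axes carry the "key information"
        # the abstract says plain VQ tends to lose in high dimensions.
        cov = centered.T @ centered / (feats.shape[0] - 1)
        evals, evecs = torch.linalg.eigh(cov)
        self.basis.copy_(evecs)
        # Normalized variances as distance weights (one choice among many).
        self.weight.copy_(evals.clamp_min(1e-8) / evals.max())

    def forward(self, z):
        # z: (B, dim). Rotate latents and codebook into the PCA basis and
        # scale each axis by its importance, so quantization error is
        # measured where the encoder actually stores information.
        scale = self.weight.sqrt()
        zp = (z - self.mean) @ self.basis * scale
        cp = (self.codebook.weight - self.mean) @ self.basis * scale
        idx = torch.cdist(zp, cp).argmin(dim=1)   # (B,)
        z_q = self.codebook(idx)
        # Straight-through estimator so gradients still reach the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, idx
```

The usual VQ training losses (commitment and reconstruction terms) and the hierarchical fusion of shallow and deep DINO features are omitted here for brevity; the sketch only illustrates how a global PCA could reweight the codebook lookup to counter information loss and codebook collapse.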
Similar Papers
WeTok: Powerful Discrete Tokenization for High-Fidelity Visual Reconstruction
CV and Pattern Recognition
Makes pictures smaller without losing detail.
2D Gaussians Meet Visual Tokenizer
CV and Pattern Recognition
Makes computer images look more real.