Adapting Self-Supervised Representations as a Latent Space for Efficient Generation
By: Ming Gui, Johannes Schusterbauer, Timy Phan, and more
Potential Business Impact:
Creates detailed pictures from simple ideas.
We introduce Representation Tokenizer (RepTok), a generative modeling framework that represents an image using a single continuous latent token obtained from self-supervised vision transformers. Building on a pre-trained SSL encoder, we fine-tune only the semantic token embedding and pair it with a generative decoder trained jointly using a standard flow matching objective. This adaptation enriches the token with low-level, reconstruction-relevant details, enabling faithful image reconstruction. To preserve the favorable geometry of the original SSL space, we add a cosine-similarity loss that regularizes the adapted token, ensuring the latent space remains smooth and suitable for generation. Our single-token formulation resolves spatial redundancies of 2D latent spaces and significantly reduces training costs. Despite its simplicity and efficiency, RepTok achieves competitive results on class-conditional ImageNet generation and naturally extends to text-to-image synthesis, reaching competitive zero-shot performance on MS-COCO under extremely limited training budgets. Our findings highlight the potential of fine-tuned SSL representations as compact and effective latent spaces for efficient generative modeling.
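The abstract describes a training recipe: a frozen SSL encoder supplies a single semantic token, only that token's embedding is fine-tuned, and a decoder is trained jointly with a flow matching objective plus a cosine-similarity regularizer that keeps the adapted token close to the original SSL representation. The sketch below illustrates how such a combined loss could look; it is not the authors' code, and the module names (`ssl_encoder`, `adapted_embed`, `decoder`) and the weight `lambda_cos` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(ssl_encoder, adapted_embed, decoder, images, lambda_cos=0.1):
    """One training step combining flow matching with a cosine regularizer.

    Sketch only: modules and hyperparameters are hypothetical placeholders
    for the components described in the abstract.
    """
    # Frozen SSL semantic token and its fine-tuned (adapted) embedding.
    with torch.no_grad():
        z_ssl = ssl_encoder(images)              # (B, D) original SSL token
    z = adapted_embed(z_ssl)                     # (B, D) adapted latent token

    # Standard flow matching: interpolate between noise x0 and data x1,
    # and regress the decoder's predicted velocity toward (x1 - x0).
    x1 = images
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.size(0), 1, 1, 1, device=x1.device)
    x_t = (1.0 - t) * x0 + t * x1                # point on the linear path
    v_target = x1 - x0                           # target velocity field
    v_pred = decoder(x_t, t.flatten(), z)        # decoder conditioned on token
    loss_fm = F.mse_loss(v_pred, v_target)

    # Cosine-similarity regularizer: keep the adapted token aligned with the
    # original SSL token so the latent geometry stays smooth for generation.
    loss_cos = 1.0 - F.cosine_similarity(z, z_ssl, dim=-1).mean()

    return loss_fm + lambda_cos * loss_cos
```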
Similar Papers
RecTok: Reconstruction Distillation along Rectified Flow
CV and Pattern Recognition
Makes AI create better, clearer pictures.
Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning
CV and Pattern Recognition
Teaches computers to understand and create pictures like words.
Towards Scalable Pre-training of Visual Tokenizers for Generation
CV and Pattern Recognition
Makes AI pictures better by understanding meaning.