SFTok: Bridging the Performance Gap in Discrete Tokenizers
By: Qihang Rao, Borui Zhang, Wenzhao Zheng, and more
Potential Business Impact:
Makes pictures look better while shrinking them into far fewer tokens.
Recent advances in multimodal models highlight the pivotal role of image tokenization in high-resolution image generation. By compressing images into compact latent representations, tokenizers let generative models operate in lower-dimensional spaces, improving computational efficiency and reducing complexity. Discrete tokenizers align naturally with the autoregressive paradigm but still lag behind continuous ones, limiting their adoption in multimodal systems. To address this, we propose SFTok, a discrete tokenizer that incorporates a multi-step iterative mechanism for precise reconstruction. By integrating self-forcing guided visual reconstruction and a debias-and-fitting training strategy, SFTok resolves the training-inference inconsistency in the multi-step process, significantly enhancing image reconstruction quality. At a high compression rate of only 64 tokens per image, SFTok achieves state-of-the-art reconstruction quality on ImageNet (rFID = 1.21) and demonstrates strong performance in class-to-image generation tasks (gFID = 2.29).
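To make the self-forcing idea concrete, here is a minimal sketch of a multi-step discrete tokenizer in which each refinement step consumes the model's own quantized output from the previous step, so the training trajectory matches what the model sees at inference. This addresses the train/inference mismatch the abstract describes; the module names, shapes, and architecture below are illustrative assumptions, not SFTok's actual design.

```python
import torch
import torch.nn as nn

class MultiStepTokenizerSketch(nn.Module):
    """Hypothetical multi-step discrete tokenizer illustrating
    self-forcing: every refinement step is conditioned on the model's
    OWN quantized output from the previous step, exactly as at
    inference. All components are stand-ins, not SFTok's modules."""

    def __init__(self, dim=256, codebook_size=1024, num_steps=4):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)    # stand-in encoder
        self.decoder = nn.Linear(dim, dim)    # stand-in decoder
        self.codebook = nn.Embedding(codebook_size, dim)
        self.refiner = nn.GRUCell(dim, dim)   # stand-in refinement step
        self.num_steps = num_steps

    def quantize(self, z):
        # Nearest-neighbor codebook lookup with a straight-through
        # estimator, the standard trick for discrete bottlenecks.
        dists = torch.cdist(z, self.codebook.weight)
        idx = dists.argmin(dim=-1)
        z_q = self.codebook(idx)
        return z + (z_q - z).detach(), idx

    def forward(self, x):
        z = self.encoder(x)
        h = torch.zeros_like(z)
        recons = []
        for _ in range(self.num_steps):
            # Self-forcing: refine from the quantized output of the
            # previous step (what inference produces), not from a
            # ground-truth latent, so train and test trajectories agree.
            z_q, _ = self.quantize(z)
            h = self.refiner(z_q, h)
            z = h
            recons.append(self.decoder(z_q))
        return recons  # per-step reconstructions for a multi-step loss

# Usage: sum a reconstruction loss over all steps so every iteration
# is trained on self-produced inputs.
model = MultiStepTokenizerSketch()
x = torch.randn(8, 256)
loss = sum(nn.functional.mse_loss(r, x) for r in model(x))
loss.backward()
```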
Similar Papers
WeTok: Powerful Discrete Tokenization for High-Fidelity Visual Reconstruction
CV and Pattern Recognition
Makes pictures smaller without losing detail.
RecTok: Reconstruction Distillation along Rectified Flow
CV and Pattern Recognition
Makes AI create better, clearer pictures.