SFTok: Bridging the Performance Gap in Discrete Tokenizers

Published: December 18, 2025 | arXiv ID: 2512.16910v1

By: Qihang Rao, Borui Zhang, Wenzhao Zheng, and more

Potential Business Impact:

Compresses images into far fewer tokens while keeping reconstruction quality high, reducing the compute cost of high-resolution image generation.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent advances in multimodal models highlight the pivotal role of image tokenization in high-resolution image generation. By compressing images into compact latent representations, tokenizers enable generative models to operate in lower-dimensional spaces, improving computational efficiency and reducing complexity. Discrete tokenizers naturally align with the autoregressive paradigm but still lag behind continuous ones, limiting their adoption in multimodal systems. To address this, the authors propose SFTok, a discrete tokenizer that incorporates a multi-step iterative mechanism for precise reconstruction. By integrating self-forcing guided visual reconstruction with a debias-and-fitting training strategy, SFTok resolves the training-inference inconsistency in the multi-step process, significantly enhancing image reconstruction quality. At a high compression rate of only 64 tokens per image, SFTok achieves state-of-the-art reconstruction quality on ImageNet (rFID = 1.21) and strong performance on class-to-image generation (gFID = 2.29).
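The core idea of self-forcing in a multi-step decoder can be illustrated with a toy sketch: at every refinement step, the model consumes its own output from the previous step, so the inputs seen at training time match those seen at inference time. The sketch below is purely illustrative; the names (`refine_step`, `NUM_STEPS`) and the linear-nudge "decoder" are assumptions, not the paper's actual architecture.

```python
import numpy as np

# Toy sketch of multi-step iterative reconstruction with self-forcing.
# A real tokenizer would use a learned decoder; here a simple linear
# nudge toward the token-encoded target stands in for it.

rng = np.random.default_rng(0)
NUM_STEPS = 4  # hypothetical number of refinement iterations

def refine_step(tokens, prev_recon):
    """One refinement step: move the reconstruction halfway toward
    the signal encoded by the discrete tokens (stand-in decoder)."""
    target = tokens.astype(float)
    return prev_recon + 0.5 * (target - prev_recon)

tokens = rng.integers(0, 16, size=64)  # 64 discrete tokens per image
recon = np.zeros(64)                   # start from a blank reconstruction

initial_err = np.abs(recon - tokens).mean()
for _ in range(NUM_STEPS):
    # Self-forcing: each step is conditioned on the model's OWN previous
    # output (recon), never on ground truth, so training and inference
    # follow the same trajectory.
    recon = refine_step(tokens, recon)

final_err = np.abs(recon - tokens).mean()
```

Each iteration halves the residual, so the reconstruction error shrinks geometrically with the number of steps; the paper's contribution is making this iterative loop trainable without the usual mismatch between teacher-forced training and free-running inference.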

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
CV and Pattern Recognition