Hot-Start from Pixels: Low-Resolution Visual Tokens for Chinese Language Modeling
By: Shuyang Xiang, Hao Guan
Large language models typically represent Chinese characters as discrete index-based tokens, largely ignoring their visual form. For logographic scripts, visual structure carries semantic and phonetic information that may aid prediction. We investigate whether low-resolution visual inputs can serve as an alternative for character-level modeling. Instead of token IDs, our decoder receives grayscale images of individual characters, at resolutions as low as 8×8 pixels. Remarkably, these inputs achieve 39.2% accuracy, comparable to the index-based baseline of 39.1%. These low-resolution settings also exhibit a pronounced hot-start effect: by 0.4% of total training, accuracy exceeds 12%, while index-based models lag below 6%. Overall, our results demonstrate that minimal visual structure can provide a robust and efficient signal for Chinese language modeling, offering an alternative perspective on character representation that complements traditional index-based approaches.
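The core substitution the abstract describes, replacing an embedding-table lookup over character IDs with a learned projection of a flattened low-resolution glyph bitmap, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the names (`d_model`, `embed_pixels`, the random stand-in bitmap) and the plain linear projection are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # hypothetical model width, not from the paper

# Index-based baseline: one learned vector per character ID.
vocab_size = 21_000
embedding_table = rng.normal(size=(vocab_size, d_model))

def embed_index(char_id: int) -> np.ndarray:
    """Standard lookup: the character's visual form is ignored."""
    return embedding_table[char_id]

# Pixel-based alternative: an 8x8 grayscale glyph image, flattened
# to 64 values and mapped into the model dimension by a learned matrix.
W_pix = rng.normal(size=(8 * 8, d_model)) * 0.1  # learned projection

def embed_pixels(bitmap: np.ndarray) -> np.ndarray:
    """Map a low-resolution character image to a model-input vector."""
    assert bitmap.shape == (8, 8)
    return bitmap.reshape(-1) @ W_pix

# Random stand-in for a rendered glyph; in practice this would be the
# grayscale rasterization of a single Chinese character.
bitmap = rng.uniform(0.0, 1.0, size=(8, 8))
print(embed_pixels(bitmap).shape)
```

Both paths produce a vector of the same shape, so the decoder downstream is unchanged; only the source of the input representation differs.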