The Strawberry Problem: Emergence of Character-level Understanding in Tokenized Language Models
By: Adrian Cosma, Stefan Ruseti, Emilian Radoi, and more
Potential Business Impact:
Helps computers understand letters, not just words.
Despite their remarkable progress across diverse domains, Large Language Models (LLMs) consistently fail at simple character-level tasks, such as counting letters in words, due to a fundamental limitation: tokenization. In this work, we frame this limitation as a problem of low mutual information between subword tokens and the characters they contain, and analyze it in terms of concept emergence. Using a suite of 19 synthetic tasks that isolate character-level reasoning in a controlled setting, we show that such capabilities emerge suddenly and only late in training. We find that percolation-based models of concept emergence explain these patterns, suggesting that learning character composition is not fundamentally different from learning commonsense knowledge. To address this bottleneck, we propose a lightweight architectural modification that significantly improves character-level reasoning while preserving the inductive advantages of subword models. Together, our results bridge low-level perceptual gaps in tokenized LMs and provide a principled framework for understanding and mitigating their structural blind spots. We make our code publicly available.
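To make the tokenization bottleneck concrete, here is a minimal, self-contained Python sketch. The toy vocabulary and the `toy_tokenize` helper are hypothetical illustrations, not the paper's tokenizer: the point is that after subword tokenization, "strawberry" reaches the model as opaque token IDs, so counting its three "r"s requires character-level structure that the token IDs do not expose.

```python
# A minimal sketch of the "strawberry problem": once text is split into
# subwords, character identity is no longer directly visible to the model.
# The vocabulary below is a toy, for illustration only.

TOY_VOCAB = {"straw": 1001, "berry": 1002, "str": 1003, "aw": 1004}

def toy_tokenize(word: str) -> list[int]:
    """Greedy longest-match subword tokenization over the toy vocabulary."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in TOY_VOCAB:
                ids.append(TOY_VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return ids

word = "strawberry"
print(toy_tokenize(word))  # [1001, 1002] -- the model sees two opaque IDs
print(word.count("r"))     # 3 -- trivial on characters, hidden at the token level
```

A model trained on these IDs must learn the character composition of each token indirectly from data, which is one way to read the paper's low-mutual-information framing of the problem.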
Similar Papers
Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters
Computation and Language
Computers learn to understand letters better.
CharBench: Evaluating the Role of Tokenization in Character-Level Tasks
Computation and Language
Helps computers understand letters inside words better.