SubTokenTest: A Practical Benchmark for Real-World Sub-token Understanding
By: Shuyang Hou, Yi Hu, Muhan Zhang
Recent advancements in large language models (LLMs) have significantly enhanced their reasoning capabilities. However, they continue to struggle with basic character-level tasks, such as counting letters in words, a problem rooted in their tokenization process. While existing benchmarks have highlighted this weakness through basic character operations, such failures are often dismissed as lacking practical relevance. Yet many real-world applications, such as navigating text-based maps or interpreting structured tables, rely heavily on precise sub-token understanding. To this end, we introduce SubTokenTest, a comprehensive benchmark that assesses sub-token understanding through practical, utility-driven tasks. Our benchmark comprises ten tasks across four domains and isolates tokenization-related failures by decoupling performance from complex reasoning. We evaluate nine advanced LLMs on the benchmark. Additionally, we investigate the impact of test-time scaling on sub-token reasoning and explore how character-level information is encoded within the hidden states.
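To make the tokenization issue concrete, here is a minimal sketch, assuming the `tiktoken` library and its `cl100k_base` encoding (neither of which the paper specifies), showing that a character-counting query operates over multi-character sub-word pieces rather than individual letters:

```python
# Minimal sketch: why letter counting is hard for a tokenized LM.
# Assumes the `tiktoken` library is installed; the encoding choice is illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

# The model consumes a handful of multi-character pieces, not ten separate
# letters, so "how many r's are in strawberry?" requires reasoning across
# token boundaries rather than simple symbol counting.
print(f"token pieces: {pieces}")
print(f"ground-truth count of 'r': {word.count('r')}")
```

In the same spirit, SubTokenTest's tasks are constructed so that a failure can be attributed to missing character-level information rather than to the surrounding reasoning demands.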
Similar Papers
CharBench: Evaluating the Role of Tokenization in Character-Level Tasks
Computation and Language
Helps computers understand letters inside words better.
The Strawberry Problem: Emergence of Character-level Understanding in Tokenized Language Models
Computation and Language
Helps computers understand letters, not just words.