Token Sugar: Making Source Code Sweeter for LLMs through Token-Efficient Shorthand
By: Zhensu Sun, Chengran Yang, Xiaoning Du, and more
Potential Business Impact:
Makes computer code shorter, so AI can read and write it faster and more cheaply.
Large language models (LLMs) have shown exceptional performance in code generation and understanding tasks, yet their high computational costs hinder broader adoption. One important factor is the inherent verbosity of programming languages, such as unnecessary formatting elements and lengthy boilerplate code. This verbosity inflates token counts in both inputs and generated outputs, which increases inference costs and slows down generation. Prior work addresses this by simplifying programming language grammar, reducing token usage across both code understanding and generation tasks. However, it is confined to syntactic transformations, leaving significant opportunities for token reduction at the semantic level unrealized. In this work, we propose Token Sugar, a concept that replaces frequent and verbose code patterns with reversible, token-efficient shorthand in the source code. To realize this concept in practice, we design a systematic solution that mines high-frequency, token-heavy patterns from a code corpus, maps each to a unique shorthand, and integrates them into LLM pretraining via code transformation. With this solution, we obtain 799 (code pattern, shorthand) pairs, which reduce source-code token counts by up to 15.1% and are complementary to existing syntax-focused methods. We further train three widely used LLMs on Token Sugar-augmented data. Experimental results show that these models not only achieve significant token savings (up to 11.2% reduction) during generation but also maintain near-identical Pass@1 scores compared to baselines trained on unprocessed code.
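To make the core transformation concrete, here is a minimal sketch of the sugar/desugar step. The (code pattern, shorthand) pairs and shorthand strings below are hypothetical illustrations, not the 799 pairs mined in the paper; in the paper's pipeline the pairs come from corpus mining, and the shorthands are assumed here to be strings the tokenizer encodes cheaply.

```python
# Minimal sketch of the Token Sugar idea (hypothetical pairs, not the paper's):
# frequent, token-heavy code patterns are replaced with short reversible markers
# before code is fed to (or produced by) an LLM, and expanded back afterwards.

SUGAR_PAIRS = {
    'if __name__ == "__main__":': "<MAIN>",
    "raise NotImplementedError": "<NIE>",
    "def __init__(self": "<INIT>(self",
}
# Reverse mapping for lossless expansion back to ordinary source code.
DESUGAR_PAIRS = {short: pattern for pattern, short in SUGAR_PAIRS.items()}


def sugar(code: str) -> str:
    """Replace verbose, frequent patterns with token-efficient shorthand."""
    for pattern, short in SUGAR_PAIRS.items():
        code = code.replace(pattern, short)
    return code


def desugar(code: str) -> str:
    """Expand shorthand back to the original code (the rewrite is reversible)."""
    for short, pattern in DESUGAR_PAIRS.items():
        code = code.replace(short, pattern)
    return code


if __name__ == "__main__":
    original = 'if __name__ == "__main__":\n    raise NotImplementedError\n'
    sweetened = sugar(original)            # fewer tokens for the LLM to read/write
    assert desugar(sweetened) == original  # nothing is lost in the round trip
    print(sweetened)
```

In the paper's actual solution, models are pretrained on the transformed (sugared) code so they generate shorthand natively; the savings then apply to both prompts and completions, with the reverse mapping recovering runnable source code.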
Similar Papers
Cost-Efficient Long Code Translation using LLMs while Leveraging Identifier Replacements
Software Engineering
Translates long computer code accurately and faster.
TokenSqueeze: Performance-Preserving Compression for Reasoning LLMs
Machine Learning (CS)
Makes smart computers think faster, using fewer words.
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code
Machine Learning (CS)
Makes AI better at writing code and solving math.