Compressed code: the hidden effects of quantization and distillation on programming tokens
By: Viacheslav Siniaev, Iaroslav Chelombitko, Aleksey Komissarov
Potential Business Impact:
Helps AI write better computer code, even when the model is made smaller.
Large Language Models (LLMs) have demonstrated exceptional code generation capabilities, yet their token-level mechanisms remain underexplored, particularly in compressed models. Through systematic analysis of programming language token representations, we characterize how programming languages are encoded in LLM tokenizers by analyzing their vocabulary distribution and keyword coverage patterns. We introduce a novel cold-start probability analysis method that provides insights into model behavior without requiring explicit prompts. Additionally, we present a comprehensive evaluation of how different model optimization techniques, including quantization, distillation, model scaling, and task-specific fine-tuning, affect token-level representations and code generation quality. Our experiments, supported by detailed probability distribution analysis and evaluation metrics, reveal critical insights into token-level behavior and provide empirically validated guidelines for maintaining code generation quality under various optimization constraints. These findings advance both theoretical understanding of LLM code generation and practical implementation of optimized models in production environments.
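The two probes named in the abstract, keyword coverage of the tokenizer vocabulary and cold-start next-token probabilities, can be illustrated with a short sketch. This is not the authors' code: the Hugging Face transformers API, the gpt2 checkpoint, and the use of Python's built-in keyword list are assumptions chosen only to make the example self-contained and runnable.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of
# keyword-coverage and cold-start probability analysis for a causal LM.
import keyword

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # hypothetical stand-in for the model under study

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Keyword coverage: how many Python keywords are encoded as a single token.
single_token_keywords = [
    kw for kw in keyword.kwlist
    if len(tokenizer.encode(kw, add_special_tokens=False)) == 1
]
print(f"{len(single_token_keywords)}/{len(keyword.kwlist)} Python keywords "
      f"map to one token, e.g. {single_token_keywords[:5]}")

# Cold-start probability analysis: feed only the BOS token (no prompt) and
# inspect the next-token distribution the model produces from scratch.
bos_id = tokenizer.bos_token_id or tokenizer.eos_token_id
input_ids = torch.tensor([[bos_id]])
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # logits for the first token
probs = torch.softmax(logits, dim=-1)

# Probability mass the un-prompted model assigns to single-token keywords.
keyword_ids = [
    tokenizer.encode(kw, add_special_tokens=False)[0]
    for kw in single_token_keywords
]
keyword_mass = probs[keyword_ids].sum().item()
print(f"Cold-start probability mass on Python keyword tokens: {keyword_mass:.6f}")
```

Running the same sketch on a base model and on its quantized or distilled variant gives one simple way to compare how compression shifts the probability mass placed on programming tokens.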
Similar Papers
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs faster and smaller.
Efficient AI in Practice: Training and Deployment of Efficient LLMs for Industry Applications
Information Retrieval
Makes small AI models as smart as big ones.
Scaling Down, Serving Fast: Compressing and Deploying Efficient LLMs for Recommendation Systems
Information Retrieval
Makes small AI models work like big ones.