Scaling Laws for Code: A More Data-Hungry Regime
By: Xianzhen Luo, Wenzhen Zheng, Qingfu Zhu, and more
Potential Business Impact:
Shows that AI models for code need more training data than expected.
Code Large Language Models (LLMs) are revolutionizing software engineering. However, the scaling laws that guide efficient training have been analyzed predominantly on Natural Language (NL). Given fundamental differences between code and NL, such as strict syntax, it is unclear whether these laws apply directly to code. To address this gap, we conduct the first large-scale empirical study of scaling laws for code, comprising 117 experimental runs with model sizes from 0.2B to 3.8B parameters and training budgets from 2B to 128B tokens. We fit both the Chinchilla law and the Farseer law. First, the results show that the more expressive Farseer law offers greater accuracy. Second, the analysis reveals that Code LLMs scale effectively with model size. Crucially, code represents a more data-hungry regime, requiring a substantially higher data-to-parameter ratio than NL. Finally, two additional sets of experiments on code-NL mixtures show that NL benefits resource-constrained scenarios, but becomes a detriment at higher compute budgets.
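For readers unfamiliar with how such laws are fit, the sketch below shows the general procedure for the Chinchilla-style parametric form L(N, D) = E + A/N^alpha + B/D^beta using SciPy's curve_fit, followed by the standard compute-optimal allocation under the C ≈ 6·N·D approximation. The data points, starting values, and compute budget are synthetic and purely illustrative; they are assumptions for the example, not the paper's 117 runs or its fitted coefficients, and the paper's Farseer fit is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def chinchilla_loss(ND, E, A, B, alpha, beta):
    """Predicted loss for N parameters and D training tokens."""
    N, D = ND
    return E + A / N**alpha + B / D**beta

# Hypothetical (params, tokens, loss) observations -- NOT the paper's data.
N = np.array([2e8, 7e8, 1.5e9, 3.8e9, 2e8, 7e8, 1.5e9, 3.8e9])
D = np.array([2e9, 8e9, 32e9, 128e9, 8e9, 32e9, 128e9, 32e9])
L = np.array([2.10, 1.78, 1.55, 1.38, 1.95, 1.66, 1.46, 1.44])

# Fit the five coefficients; non-negative bounds keep every term meaningful.
p0 = [1.0, 400.0, 400.0, 0.3, 0.3]
popt, _ = curve_fit(chinchilla_loss, (N, D), L, p0=p0,
                    bounds=(0, np.inf), maxfev=20000)
E, A, B, alpha, beta = popt
print(f"E={E:.3f}  A={A:.1f}  B={B:.1f}  alpha={alpha:.3f}  beta={beta:.3f}")

# Compute-optimal allocation for a budget C ~= 6*N*D (Chinchilla-style):
# minimizing the fitted loss at fixed C gives the optimal parameter count
# N_opt and token count D_opt below; their ratio D_opt/N_opt is the
# data-to-parameter ratio the abstract refers to.
C = 1e21  # FLOPs budget, illustrative
G = (alpha * A / (beta * B)) ** (1.0 / (alpha + beta))
N_opt = G * (C / 6) ** (beta / (alpha + beta))
D_opt = (C / 6) / N_opt
print(f"N_opt={N_opt:.2e} params  D_opt={D_opt:.2e} tokens  "
      f"D/N={D_opt / N_opt:.1f}")
```

In this framing, the paper's claim that code is "more data-hungry" corresponds to fits whose exponents and coefficients push the optimal D/N ratio higher than the roughly 20 tokens per parameter reported for natural language in the Chinchilla analysis.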
Similar Papers
Predictable Scale: Part II, Farseer: A Refined Scaling Law in Large Language Models
Machine Learning (CS)
Predicts how big AI models will work before training.
Large Language Model Scaling Laws for Neural Quantum States in Quantum Chemistry
Machine Learning (CS)
Makes quantum computers learn faster and better.
Relative Scaling Laws for LLMs
Computation and Language
Shows how AI gets better, but not equally.