A Latent Variable Framework for Scaling Laws in Large Language Models
By: Peiyao Cai, Chengyu Cui, Felipe Maia Polo, and more
Potential Business Impact:
Helps predict how capable large language models will become as they scale.
We propose a statistical framework built on latent variable modeling for scaling laws of large language models (LLMs). Our work is motivated by the rapid emergence of numerous new LLM families with distinct architectures and training strategies, evaluated on an increasing number of benchmarks. This heterogeneity makes a single global scaling curve inadequate for capturing how performance varies across families and benchmarks. To address this, we propose a latent variable modeling framework in which each LLM family is associated with a latent variable that captures the common underlying features in that family. An LLM's performance on different benchmarks is then driven by its latent skills, which are jointly determined by the latent variable and the model's own observable features. We develop an estimation procedure for this latent variable model and establish its statistical properties. We also design efficient numerical algorithms that support estimation and various downstream tasks. Empirically, we evaluate the approach on 12 widely used benchmarks from the Open LLM Leaderboard (v1/v2).
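To make the structure concrete, a minimal sketch of such a hierarchical model might look as follows (the functional forms, link function, and symbols below are illustrative assumptions, not the paper's exact specification). The score of model $i$ from family $f$ on benchmark $b$ could be written as

$$
y_{fib} = \sigma\!\big(\alpha_b + \beta_b^{\top} s_{fi}\big) + \varepsilon_{fib},
\qquad
s_{fi} = h\!\big(u_f, x_{fi}\big),
\qquad
u_f \sim \mathcal{N}(0, \Sigma_u),
$$

where $u_f$ is the family-level latent variable shared by all models in family $f$, $x_{fi}$ collects the model's observable features (for example, parameter count or training tokens), $s_{fi}$ are the resulting latent skills, and $\alpha_b, \beta_b$ are benchmark-specific parameters mapping skills to observed performance.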
Similar Papers
Uncovering Scaling Laws for Large Language Models via Inverse Problems
Machine Learning (CS)
Finds best ways to build smart computer programs cheaper.
Scaling Law Phenomena Across Regression Paradigms: Multiple and Kernel Approaches
Machine Learning (CS)
Makes AI models smarter by understanding how to train them.
Scaling Laws for Code: A More Data-Hungry Regime
Computation and Language
Makes computer code smarter with more data.