Theoretical Foundations of Scaling Law in Familial Models
By: Huan Song, Qingfei Zhao, Ting Long, and more
Potential Business Impact:
Train one AI, use many versions.
Neural scaling laws have become foundational for optimizing large language model (LLM) training, yet they typically assume a single dense model output. This limitation overlooks familial models, a transformative paradigm for realizing ubiquitous intelligence across heterogeneous device-edge-cloud hierarchies. Transcending static architectures, familial models integrate early exits with relay-style inference to spawn G deployable sub-models from a single shared backbone. In this work, we theoretically and empirically extend scaling laws to capture this "one-run, many-models" paradigm by introducing granularity (G) as a fundamental scaling variable alongside model size (N) and training tokens (D). To quantify this relationship, we propose a unified functional form L(N, D, G) and parameterize it using large-scale empirical runs. Specifically, we employ an IsoFLOP experimental design to isolate architectural impact from computational scale: across fixed compute budgets, we systematically sweep model sizes (N) and granularities (G) while adjusting tokens (D) accordingly. This decouples the marginal cost of granularity from the benefits of scale and yields a high-fidelity parameterization of our unified scaling law. Our results reveal that the granularity penalty follows a multiplicative power law with an extremely small exponent. Theoretically, this bridges fixed-compute training with dynamic architectures. Practically, it validates the "train once, deploy many" paradigm, demonstrating that deployment flexibility is achievable without compromising the compute-optimality of dense baselines.
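To make the fitting procedure concrete, here is a minimal sketch of parameterizing a unified scaling law L(N, D, G) from IsoFLOP-style measurements. The functional form below, a Chinchilla-style dense term multiplied by a power-law granularity penalty G^gamma, is an assumption for illustration only; the paper's exact parameterization is not given in the abstract, and the compute budget, grid values, and "observed" losses are synthetic stand-ins.

```python
# Sketch: fit L(N, D, G) = (E + A/N**alpha + B/D**beta) * G**gamma
# to IsoFLOP-style loss measurements. Functional form and data are assumed.

import numpy as np
from scipy.optimize import curve_fit

def familial_scaling_law(X, E, A, alpha, B, beta, gamma):
    """Predicted loss for model size N, training tokens D, granularity G."""
    N, D, G = X
    dense = E + A / N**alpha + B / D**beta   # dense-model loss term
    return dense * G**gamma                  # multiplicative granularity penalty

# Synthetic IsoFLOP grid: sweep N and G at a fixed budget C, with D chosen so
# that C ~= 6 * N * D (the usual training-FLOPs approximation).
C = 1e20                                     # fixed compute budget (assumed)
N = np.tile([1e8, 3e8, 1e9, 3e9], 4)         # model sizes
G = np.repeat([1, 2, 4, 8], 4)               # sub-models per backbone
D = C / (6 * N)                              # tokens implied by the budget

# Stand-in "observed" losses; gamma is deliberately tiny (mild penalty).
true_params = (1.69, 406.0, 0.34, 411.0, 0.28, 0.01)
L = familial_scaling_law((N, D, G), *true_params)
L *= 1 + 0.01 * np.random.default_rng(0).standard_normal(L.shape)

# Fit the six parameters by nonlinear least squares.
p0 = (2.0, 100.0, 0.3, 100.0, 0.3, 0.05)
popt, _ = curve_fit(familial_scaling_law, (N, D, G), L,
                    p0=p0, bounds=(0, np.inf), maxfev=20000)
print(dict(zip(["E", "A", "alpha", "B", "beta", "gamma"], popt)))
```

On real IsoFLOP measurements, the fitted gamma would directly quantify the granularity penalty: a value near zero corresponds to the abstract's claim that adding sub-models barely shifts the compute-optimal loss of the dense baseline.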
Similar Papers
Generalizing Scaling Laws for Dense and Sparse Large Language Models
Machine Learning (CS)
Better predicts computer brain sizes and needs.
Unifying Learning Dynamics and Generalization in Transformers Scaling Law
Machine Learning (CS)
Makes AI learn better with more computer power.