Learning Shrinks the Hard Tail: Training-Dependent Inference Scaling in a Solvable Linear Model
By: Noam Levi
Potential Business Impact:
Helps decide when to train AI more and when to let it try more answers.
We analyze neural scaling laws in a solvable model of last-layer fine-tuning where targets have intrinsic, instance-heterogeneous difficulty. In our Latent Instance Difficulty (LID) model, each input's target variance is governed by a latent ``precision'' drawn from a heavy-tailed distribution. While generalization loss recovers standard scaling laws, our main contribution connects this to inference. The pass@$k$ failure rate exhibits a power-law decay, $k^{-\beta_\text{eff}}$, but the observed exponent $\beta_\text{eff}$ is training-dependent. It grows with sample size $N$ before saturating at an intrinsic limit $\beta$ set by the difficulty distribution's tail. This coupling reveals that learning shrinks the ``hard tail'' of the error distribution: improvements in the model's generalization error steepen the pass@$k$ curve until irreducible target variance dominates. The LID model yields testable, closed-form predictions for this behavior, including a compute-allocation rule that favors training before saturation and inference attempts after. We validate these predictions in simulations and in two real-data proxies: CIFAR-10H (human-label variance) and a maths teacher-student distillation task.
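As a rough illustration of the pass@$k$ claim, the sketch below simulates a toy LID-style setup. It is not the paper's code or its exact model: the Gamma-distributed latent precision, the $\sqrt{1 + c/N}$ inflation of the error scale, and the tolerance defining a successful attempt are all illustrative assumptions. Under those assumptions, the fitted exponent $\beta_\text{eff}$ should grow with $N$ and level off near the toy's own intrinsic tail exponent.

```python
# Toy Monte-Carlo sketch of training-dependent pass@k scaling (illustrative, not the paper's model).
# Assumptions: latent precision lambda ~ Gamma(shape, 1); irreducible error scale 1/sqrt(lambda);
# a reducible inflation factor sqrt(1 + c/N) that shrinks toward 1 as the sample size N grows;
# an attempt succeeds when a Gaussian error falls within a fixed tolerance.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

def passk_failure(N, ks, n_instances=400_000, shape=0.75, c=1.0e4, tol=1.0):
    """Monte-Carlo estimate of the pass@k failure rate E_i[(1 - p_i)^k] in the toy model."""
    lam = rng.gamma(shape, 1.0, size=n_instances)            # latent per-instance precision
    irreducible_var = 1.0 / lam                              # intrinsic target variance
    total_std = np.sqrt(irreducible_var * (1.0 + c / N))     # model error inflates the scale; shrinks with N (assumed form)
    p_success = erf(tol / (np.sqrt(2.0) * total_std))        # P(|Gaussian error| < tol) on one attempt
    return np.array([np.mean((1.0 - p_success) ** k) for k in ks])

ks = 2 ** np.arange(0, 9)                                     # k = 1, 2, ..., 256
for N in (10, 100, 1_000, 100_000):
    fail = passk_failure(N, ks)
    beta_eff = -np.polyfit(np.log(ks), np.log(fail), 1)[0]    # log-log slope over the k window
    print(f"N = {N:>7}:  beta_eff ~ {beta_eff:.2f}")
# Expected qualitative behavior: beta_eff grows with N and saturates near the
# intrinsic tail exponent of this toy (roughly 2 * shape = 1.5 here).
```

The window-fitted slope deliberately mimics how $\beta_\text{eff}$ would be measured in practice over a finite range of attempts, which is why it sits below the asymptotic tail exponent at small $N$.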
Similar Papers
Demystifying LLM-as-a-Judge: Analytically Tractable Model for Inference-Time Scaling
Machine Learning (CS)
Makes AI better by trying more answers.
Unifying Learning Dynamics and Generalization in Transformers Scaling Law
Machine Learning (CS)
Makes AI learn better with more computer power.
Scaling Laws are Redundancy Laws
Machine Learning (CS)
Explains why bigger computer brains learn faster.