Zero-Shot Performance Prediction for Probabilistic Scaling Laws
By: Viktoria Schram, Markus Hiller, Daniel Beck, and more
Potential Business Impact:
Predicts how well an AI model will learn before it is fully trained, saving money on compute and data.
The prediction of learning curves for Natural Language Processing (NLP) models enables informed decision-making to meet specific performance objectives, while reducing computational overhead and lowering the costs associated with dataset acquisition and curation. In this work, we formulate the prediction task as a multitask learning problem, where each task's data is modelled as being organized within a two-layer hierarchy. To model the shared information and dependencies across tasks and hierarchical levels, we employ latent-variable multi-output Gaussian Processes, which allow us to account for task correlations and support zero-shot prediction of learning curves (LCs). We demonstrate that this approach facilitates the development of probabilistic scaling laws at lower cost. Using an active learning strategy, LCs can be queried to reduce predictive uncertainty, yielding predictions close to ground-truth scaling laws. We validate our framework on three small-scale NLP datasets with up to $30$ LCs. These are obtained from nanoGPT models, from bilingual translation using mBART and Transformer models, and from multilingual translation using M2M100 models of varying sizes.
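For illustration, here is a minimal sketch (not the authors' implementation) of the core idea: a multi-output Gaussian Process with an intrinsic coregionalization (ICM) kernel shares information across tasks through a task covariance matrix, so a new task's learning curve can be predicted even with no observations of its own, and the dataset size with the largest predictive variance is queried next, mirroring the active learning step. The task covariance `B`, kernel hyperparameters, and toy curves are all assumptions made for this example; the paper instead learns latent task representations.

```python
# Sketch: zero-shot learning-curve prediction with an ICM multi-output GP
# plus a variance-based active-learning query. All hyperparameters and data
# below are illustrative assumptions, not values from the paper.
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel on (log) dataset size."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def icm_kernel(X1, T1, X2, T2, B, lengthscale=1.0, variance=1.0):
    """ICM kernel: k((x,t), (x',t')) = B[t, t'] * k_rbf(x, x')."""
    return B[np.ix_(T1, T2)] * rbf(X1, X2, lengthscale, variance)

def gp_posterior(X_tr, T_tr, y_tr, X_te, T_te, B, noise=1e-3, **kw):
    """Exact GP posterior mean and variance at the test inputs."""
    K = icm_kernel(X_tr, T_tr, X_tr, T_tr, B, **kw) + noise * np.eye(len(X_tr))
    Ks = icm_kernel(X_te, T_te, X_tr, T_tr, B, **kw)
    Kss = icm_kernel(X_te, T_te, X_te, T_te, B, **kw)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, np.maximum(var, 0.0)

def toy_curve(log_n, a, b):
    """Power-law-like loss curve over log dataset size (synthetic)."""
    return a * np.exp(-0.3 * log_n) + b

# Two observed "source" learning curves (tasks 0 and 1) over log dataset size.
rng = np.random.default_rng(0)
log_n = np.linspace(np.log(1e3), np.log(1e6), 8)
X_tr = np.concatenate([log_n, log_n])
T_tr = np.concatenate([np.zeros(8, int), np.ones(8, int)])
y_tr = np.concatenate([toy_curve(log_n, 2.0, 0.5),
                       toy_curve(log_n, 1.8, 0.6)]) + 0.01 * rng.standard_normal(16)

# Hand-set task covariance: the unseen target task 2 correlates with both sources.
B = np.array([[1.0, 0.8, 0.7],
              [0.8, 1.0, 0.7],
              [0.7, 0.7, 1.0]])

# Zero-shot prediction of the target task's learning curve (no task-2 data).
X_te = np.linspace(np.log(1e3), np.log(1e6), 50)
T_te = np.full(50, 2, dtype=int)
mean, var = gp_posterior(X_tr, T_tr, y_tr, X_te, T_te, B, lengthscale=2.0)

# Active learning: query the dataset size with the largest predictive variance.
next_query = X_te[np.argmax(var)]
print(f"Predicted zero-shot curve; query next at log(n) = {next_query:.2f}")
```

In this toy setup the target curve is inferred purely through the task covariance with the two observed curves; in the paper's framework the analogous correlations are learned from data via latent task variables, and queried LCs progressively tighten the predictive uncertainty around the ground-truth scaling law.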
Similar Papers
Predicting Language Models' Success at Zero-Shot Probabilistic Prediction
Machine Learning (CS)
Helps computers guess if they'll do a good job.
Large Language Model Scaling Laws for Neural Quantum States in Quantum Chemistry
Machine Learning (CS)
Makes quantum computers learn faster and better.
Learning curves theory for hierarchically compositional data with power-law distributed features
Machine Learning (Stat)
Makes AI learn faster by understanding how things are built.