Do Generalisation Results Generalise?
By: Matteo Boglioni, Andrea Sgobbi, Gabriel Tavernini and more
Potential Business Impact:
Tests if AI learns the same way everywhere.
A large language model's (LLM's) out-of-distribution (OOD) generalisation ability is crucial to its deployment. Previous work assessing LLMs' generalisation performance, however, typically focuses on a single out-of-distribution dataset. This approach may fail to evaluate a model's capabilities precisely, as the data shifts encountered once a model is deployed are far more diverse. In this work, we investigate whether OOD generalisation results generalise. More specifically, we evaluate a model's performance across multiple OOD testsets throughout a finetuning run; we then compute the partial correlation of performances across these testsets, regressing out in-domain performance. This allows us to assess how correlated generalisation performances are once in-domain performance is controlled for. Analysing OLMo2 and OPT, we observe no overarching trend in generalisation results: whether any two OOD testsets are positively or negatively correlated depends strongly on the specific model analysed.
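The key quantity in the abstract is a partial correlation: the correlation between two OOD performance trajectories after the effect of in-domain performance is regressed out. A minimal sketch of one standard way to compute it, regressing each OOD series on in-domain scores and correlating the residuals; the function name and checkpoint data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy import stats

def partial_correlation(ood_a, ood_b, in_domain):
    """Partial correlation of two OOD performance series,
    controlling for in-domain performance.

    Each array holds one score per finetuning checkpoint.
    """
    # Regress each OOD series on in-domain performance
    # (with an intercept term) and keep the residuals.
    X = np.column_stack([np.ones_like(in_domain), in_domain])
    resid_a = ood_a - X @ np.linalg.lstsq(X, ood_a, rcond=None)[0]
    resid_b = ood_b - X @ np.linalg.lstsq(X, ood_b, rcond=None)[0]
    # The partial correlation is the Pearson correlation of the residuals.
    return stats.pearsonr(resid_a, resid_b)

# Hypothetical example: accuracies at 50 checkpoints of a finetuning run.
rng = np.random.default_rng(0)
in_domain = np.linspace(0.5, 0.9, 50) + rng.normal(0, 0.01, 50)
ood_a = 0.8 * in_domain + rng.normal(0, 0.02, 50)
ood_b = 0.7 * in_domain + rng.normal(0, 0.02, 50)
print(partial_correlation(ood_a, ood_b, in_domain))
```

Controlling for in-domain performance matters because both OOD scores typically rise as the model simply gets better in-domain; correlating raw scores would conflate that shared trend with genuine agreement between the two distribution shifts.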
Similar Papers
Revisiting Generalization Across Difficulty Levels: It's Not So Easy
Computation and Language
Teaches computers to learn from easy and hard lessons.
Generalizability of Large Language Model-Based Agents: A Comprehensive Survey
Artificial Intelligence
Helps AI agents work well in new situations.
Compute-Optimal LLMs Provably Generalize Better With Scale
Machine Learning (CS)
Makes AI smarter by understanding more words.