Do Generalisation Results Generalise?

Published: December 8, 2025 | arXiv ID: 2512.07832v1

By: Matteo Boglioni, Andrea Sgobbi, Gabriel Tavernini, and more

Potential Business Impact:

Tests whether an AI model's generalisation to one data shift predicts its generalisation to others.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A large language model's (LLM's) out-of-distribution (OOD) generalisation ability is crucial to its deployment. Previous work assessing LLMs' generalisation performance, however, typically focuses on a single out-of-distribution dataset. This approach may fail to precisely evaluate the capabilities of the model, as the data shifts encountered once a model is deployed are much more diverse. In this work, we investigate whether OOD generalisation results generalise. More specifically, we evaluate a model's performance across multiple OOD test sets throughout a finetuning run; we then evaluate the partial correlation of performances across these test sets, regressing out in-domain performance. This allows us to assess how correlated generalisation performances are once in-domain performance is controlled for. Analysing OLMo2 and OPT, we observe no overarching trend in generalisation results: whether any two OOD test sets are positively or negatively correlated depends strongly on the specific model analysed.
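The partial-correlation analysis the abstract describes can be sketched as follows: regress each OOD performance series on in-domain performance, then correlate the residuals. This is a minimal illustration with synthetic checkpoint-wise accuracies, not the paper's actual data or code; the function name and the generated series are assumptions for the example.

```python
import numpy as np


def partial_correlation(x, y, z):
    """Partial correlation of x and y controlling for z: correlate the
    residuals of least-squares regressions of x on z and of y on z."""
    design = np.column_stack([np.ones_like(z), z])  # intercept + covariate
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])


# Hypothetical accuracies at 50 checkpoints of a finetuning run.
rng = np.random.default_rng(0)
in_domain = rng.uniform(0.5, 0.9, size=50)           # in-domain accuracy
ood_a = 0.8 * in_domain + rng.normal(0, 0.02, 50)    # OOD test set A
ood_b = 0.8 * in_domain + rng.normal(0, 0.02, 50)    # OOD test set B

raw_r = float(np.corrcoef(ood_a, ood_b)[0, 1])       # inflated by shared driver
partial_r = partial_correlation(ood_a, ood_b, in_domain)
```

Here both OOD series are driven by in-domain performance, so their raw correlation is high; once in-domain performance is regressed out, only the independent noise remains and the partial correlation collapses toward zero, which is the distinction the paper's analysis relies on.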

Country of Origin
🇨🇭 Switzerland

Page Count
18 pages

Category
Computer Science: Computation and Language