Revisiting Generalization Across Difficulty Levels: It's Not So Easy
By: Yeganeh Kordi, Nihal V. Nayak, Max Zuo, and more
Potential Business Impact:
Shows that computers need both easy and hard practice problems to improve at everything.
We investigate how well large language models (LLMs) generalize across task difficulties, a key question for effective data curation and evaluation. Existing research is mixed on whether training on easier or harder data leads to better results, and on whether those gains appear on easier or harder test data. We address this question with a systematic evaluation of LLMs' generalization across models, datasets, and fine-grained groups of example difficulty. We rank examples in six datasets using the outputs of thousands of different LLMs and Item Response Theory (IRT), a well-established framework for measuring difficulty in educational testing. Unlike prior work, our difficulty ratings are therefore determined solely by the abilities of many different LLMs, excluding human judgments of difficulty. With this more objective, larger-scale, and finer-grained analysis, we show that cross-difficulty generalization is often limited: training on either easy or hard data alone does not yield consistent improvements across the full range of difficulties. These results underscore the importance of including a range of difficulties in both training and evaluation data for LLMs, and show that taking shortcuts with respect to difficulty is risky.
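To make the IRT-based ranking concrete, here is a minimal sketch of how item difficulty can be estimated from a binary matrix of model-versus-example correctness using a Rasch (1PL) model fit by gradient ascent. This is an illustration under assumptions, not the authors' implementation: the function name `fit_rasch`, the fitting procedure, and the toy data are hypothetical.

```python
import numpy as np

def fit_rasch(responses, n_iters=500, lr=0.05):
    """Illustrative sketch only, not the paper's implementation.

    Fit a 1PL (Rasch) IRT model by gradient ascent on the log-likelihood.
    responses: binary matrix of shape (n_models, n_items);
               responses[m, i] = 1 if model m answered item i correctly.
    Returns (ability, difficulty); a higher difficulty value means a harder item.
    """
    n_models, n_items = responses.shape
    ability = np.zeros(n_models)      # theta_m, one per LLM
    difficulty = np.zeros(n_items)    # b_i, one per example

    for _ in range(n_iters):
        # P(correct) = sigmoid(theta_m - b_i)
        logits = ability[:, None] - difficulty[None, :]
        prob = 1.0 / (1.0 + np.exp(-logits))
        resid = responses - prob                 # gradient of the Bernoulli log-likelihood
        ability += lr * resid.sum(axis=1) / n_items
        difficulty -= lr * resid.sum(axis=0) / n_models
        difficulty -= difficulty.mean()          # center difficulties for identifiability

    return ability, difficulty

# Toy usage: 5 hypothetical "LLMs" answering 4 items, 1 = correct.
rng = np.random.default_rng(0)
toy = rng.integers(0, 2, size=(5, 4))
theta, b = fit_rasch(toy)
print("item ranking, easiest to hardest:", np.argsort(b))
```

Sorting examples by the fitted difficulty parameter is what allows the paper's fine-grained grouping of examples by difficulty without relying on human judgments.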
Similar Papers
A Shared Geometry of Difficulty in Multilingual Language Models
Computation and Language
Helps computers understand how hard problems are.
Estimating problem difficulty without ground truth using Large Language Model comparisons
Machine Learning (CS)
Helps AI learn harder problems by guessing difficulty.
Take Out Your Calculators: Estimating the Real Difficulty of Question Items with LLM Student Simulations
Computation and Language
Computers guess how hard math problems are.