Can Out-of-Distribution Evaluations Uncover Reliance on Shortcuts? A Case Study in Question Answering
By: Michal Štefánik, Timothee Mickus, Marek Kadlčík, and more
Potential Business Impact:
Tests if AI cheats by finding easy answers.
A majority of recent work in AI assesses models' generalization capabilities through performance on out-of-distribution (OOD) datasets. Despite their practicality, such evaluations rest on a strong assumption: that OOD evaluations capture and reflect a model's possible failures in real-world deployment. In this work, we challenge this assumption and confront the results obtained from OOD evaluations with a set of specific failure modes documented in existing question-answering (QA) models, referred to as reliance on spurious features or prediction shortcuts. We find that the datasets commonly used for OOD evaluation in QA provide estimates of models' robustness to shortcuts of vastly different quality, with some falling short of even a simple in-distribution evaluation. We attribute this in part to the observation that spurious shortcuts are shared across ID and OOD datasets, but we also find cases where a dataset's quality for training and its quality for evaluation are largely disconnected. Our work underlines the limitations of commonly used OOD-based evaluations of generalization, and provides methodology and recommendations for evaluating generalization within and beyond QA more robustly.
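The comparison the abstract describes can be illustrated with a minimal sketch: evaluate a QA model in-distribution, out-of-distribution, and on splits stratified by whether a suspected shortcut (e.g., high lexical overlap between question and context) applies, then check whether the OOD drop tracks the shortcut-reliance gap. The `predict` callable and the `uses_shortcut` heuristic below are hypothetical placeholders for illustration, not the paper's implementation.

```python
# Hedged sketch: comparing ID/OOD accuracy against a shortcut-reliance gap.
# All names here are illustrative assumptions, not the authors' protocol.
from typing import Callable, Sequence


def accuracy(preds: Sequence[str], golds: Sequence[str]) -> float:
    """Exact-match accuracy over paired predictions and gold answers."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)


def shortcut_gap(
    questions: Sequence[str],
    golds: Sequence[str],
    predict: Callable[[str], str],
    uses_shortcut: Callable[[str, str], bool],
) -> float:
    """Accuracy gap between examples where a suspected shortcut applies and
    where it does not; a large positive gap suggests reliance on the shortcut."""
    preds = [predict(q) for q in questions]
    with_sc = [(p, g) for p, g, q in zip(preds, golds, questions) if uses_shortcut(q, g)]
    without_sc = [(p, g) for p, g, q in zip(preds, golds, questions) if not uses_shortcut(q, g)]
    acc_with = accuracy(*zip(*with_sc)) if with_sc else float("nan")
    acc_without = accuracy(*zip(*without_sc)) if without_sc else float("nan")
    return acc_with - acc_without


# Usage (hypothetical data and model):
#   id_acc  = accuracy([predict(q) for q in id_questions], id_golds)
#   ood_acc = accuracy([predict(q) for q in ood_questions], ood_golds)
#   gap     = shortcut_gap(id_questions, id_golds, predict, uses_shortcut)
# The question the paper raises is whether the ID-to-OOD accuracy drop
# actually reflects `gap`, or whether the OOD set shares the same shortcuts.
```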
Similar Papers
Aggregation Hides Out-of-Distribution Generalization Failures from Spurious Correlations
Machine Learning (CS)
Finds hidden computer mistakes in new situations.
Out-of-distribution generalisation is hard: evidence from ARC-like tasks
Machine Learning (CS)
Teaches computers to learn like humans.
ODP-Bench: Benchmarking Out-of-Distribution Performance Prediction
Machine Learning (CS)
Tests computer models on new, unseen data.