ODP-Bench: Benchmarking Out-of-Distribution Performance Prediction
By: Han Yu, Kehan Li, Dongbai Li, and more
Potential Business Impact:
Tests computer models on new, unseen data.
Recently, increasing attention has been paid to Out-of-Distribution (OOD) performance prediction, whose goal is to predict the performance of trained models on unlabeled OOD test datasets, so that off-the-shelf trained models can be better leveraged and deployed in risk-sensitive scenarios. Although progress has been made in this area, evaluation protocols in previous literature are inconsistent, and most works cover only a limited number of real-world OOD datasets and types of distribution shift. To provide convenient and fair comparisons among various algorithms, we propose the Out-of-Distribution Performance Prediction Benchmark (ODP-Bench), a comprehensive benchmark that includes the most commonly used OOD datasets and existing practical performance prediction algorithms. We provide our trained models as a testbench for future researchers, thereby guaranteeing consistency of comparison and avoiding the burden of repeating the model training process. Furthermore, we conduct in-depth experimental analyses to better understand the capability boundaries of these algorithms.
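To make the task concrete, below is a minimal sketch of one widely used baseline for OOD accuracy prediction, which estimates a classifier's accuracy on an unlabeled test set as its average maximum softmax confidence. The function name and toy data are illustrative assumptions, not the benchmark's actual API or any specific algorithm evaluated in ODP-Bench.

import numpy as np

def predict_accuracy_avg_confidence(probs: np.ndarray) -> float:
    """Predict a classifier's accuracy on an unlabeled OOD test set as the
    mean of its maximum softmax probabilities (the "average confidence"
    baseline). `probs` has shape (num_samples, num_classes), rows sum to 1."""
    return float(np.max(probs, axis=1).mean())

# Toy usage: softmax outputs for three unlabeled OOD samples, three classes.
probs = np.array([
    [0.70, 0.20, 0.10],
    [0.50, 0.30, 0.20],
    [0.90, 0.05, 0.05],
])
print(predict_accuracy_avg_confidence(probs))  # predicted accuracy ~0.70

Methods of this kind need no OOD labels, which is exactly why a shared testbench of trained models and datasets matters: the predicted accuracy can only be validated against held-out labels under a consistent protocol.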
Similar Papers
BOOM: Benchmarking Out-Of-distribution Molecular Property Predictions of Machine Learning Models
Machine Learning (CS)
Finds new medicines by predicting molecule behavior.
phepy: Visual Benchmarks and Improvements for Out-of-Distribution Detectors
Machine Learning (CS)
Helps computers know when they don't know.
General OOD Detection via Model-aware and Subspace-aware Variable Priority
Machine Learning (Stat)
Finds when computer predictions are wrong.