Unique Rashomon Sets for Robust Active Learning
By: Simon Nguyen, Kentaro Hoffman, Tyler McCormick
Potential Business Impact:
Teaches computers to learn better with less data.
Collecting labeled data for machine learning models is often expensive and time-consuming. Active learning addresses this challenge by selectively labeling the most informative observations, but when the initial labeled data is limited, it becomes difficult to distinguish genuinely informative points from those that appear uncertain primarily due to noise. Ensemble methods such as random forests are a powerful way to quantify this uncertainty, but they do so by aggregating all models indiscriminately, including poorly performing and redundant ones, a problem that worsens in the presence of noisy data. We introduce UNique Rashomon Ensembled Active Learning (UNREAL), which selectively ensembles only distinct models from the Rashomon set, the set of nearly optimal models. Restricting ensemble membership to high-performing models with different explanations helps distinguish genuine uncertainty from noise-induced variation. We show that UNREAL achieves faster theoretical convergence rates than traditional active learning approaches and delivers empirical improvements of up to 20% in predictive accuracy across five benchmark datasets, while simultaneously enhancing model interpretability.
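To make the selection step concrete, the sketch below illustrates the idea described in the abstract: fit many candidate models, keep only those within a small tolerance of the best empirical loss (the Rashomon set), drop redundant members, and query the unlabeled point where the remaining models disagree most. This is a minimal illustration, not the authors' implementation; the tolerance epsilon, the decision-tree model class, and the use of distinct prediction patterns as a stand-in for "different explanations" are assumptions made here for brevity.

```python
# Illustrative sketch of Rashomon-set-based active learning (not the paper's code).
# Assumptions: epsilon = 0.02, decision trees as the model class, and "distinct
# explanations" approximated by distinct prediction patterns on the labeled pool.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rashomon_active_query(X_lab, y_lab, X_pool, n_models=200, epsilon=0.02, seed=0):
    rng = np.random.default_rng(seed)
    models, losses = [], []
    for i in range(n_models):
        # Diverse candidates via bootstrap resampling and varying tree depth.
        idx = rng.integers(0, len(X_lab), len(X_lab))
        m = DecisionTreeClassifier(max_depth=int(rng.integers(2, 6)), random_state=i)
        m.fit(X_lab[idx], y_lab[idx])
        losses.append(1.0 - m.score(X_lab, y_lab))  # empirical 0-1 loss on labeled data
        models.append(m)

    # Rashomon set: models within epsilon of the best observed loss.
    best = min(losses)
    rashomon = [m for m, l in zip(models, losses) if l <= best + epsilon]

    # Keep only "unique" members: one model per distinct prediction pattern.
    unique, seen = [], set()
    for m in rashomon:
        pattern = tuple(m.predict(X_lab))
        if pattern not in seen:
            seen.add(pattern)
            unique.append(m)

    # Query the pool point where the unique ensemble disagrees most (vote entropy).
    votes = np.stack([m.predict(X_pool) for m in unique])  # shape (n_unique, n_pool)

    def vote_entropy(col):
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    scores = np.apply_along_axis(vote_entropy, 0, votes)
    return int(np.argmax(scores)), unique
```

Deduplicating the Rashomon set before measuring disagreement is what keeps redundant near-copies of the same model from drowning out the signal; in this sketch that is done with prediction patterns, whereas the paper's notion of "different explanations" may differ.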
Similar Papers
"A 6 or a 9?": Ensemble Learning Through the Multiplicity of Performant Models and Explanations
Machine Learning (CS)
Finds best computer answers from many good ones.
DUAL: Diversity and Uncertainty Active Learning for Text Summarization
Computation and Language
Teaches computers to summarize text better with less data.
Allocation Multiplicity: Evaluating the Promises of the Rashomon Set
Computers and Society
Helps computers make fairer choices when resources are scarce.