Beyond Random Sampling: Instance Quality-Based Data Partitioning via Item Response Theory
By: Lucas Cardoso, Vitor Santos, José Ribeiro Filho and more
Potential Business Impact:
Makes computer learning fairer and more accurate.
Robust validation of Machine Learning (ML) models is essential, but traditional data partitioning approaches often ignore the intrinsic quality of each instance. This study proposes using Item Response Theory (IRT) parameters to characterize instances and guide dataset partitioning in the model validation stage. The impact of IRT-informed partitioning strategies on the performance of several ML models was evaluated on four tabular datasets. The results demonstrate that IRT reveals an inherent heterogeneity among instances and highlights the existence of informative subgroups of instances within the same dataset. Based on IRT, balanced partitions were created that consistently help clarify the tradeoff between the bias and variance of the models. In addition, the guessing parameter proved to be a determining factor: training on high-guessing instances can significantly impair model performance, resulting in cases with accuracy below 50%, while other partitions of the same dataset reached more than 70%.
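The partitioning idea in the abstract can be illustrated with a minimal sketch. It assumes per-instance IRT guessing parameters (`c`) have already been estimated, e.g. with a 3PL model; the function name and the two strategies shown (a low-vs-high guessing contrast and a "balanced" split where each fold covers the full guessing range) are hypothetical illustrations, not the paper's exact pipeline.

```python
import numpy as np

def guessing_splits(c):
    """Two partitioning strategies driven by a per-instance IRT
    guessing parameter c (assumed pre-estimated, e.g. via a 3PL model).

    Returns (homogeneous, balanced):
      homogeneous: [low-guessing half, high-guessing half] -- contrasts
                   easy-to-guess instances against the rest
      balanced:    two folds interleaved over the sorted order, so each
                   fold spans the full range of guessing values
    """
    order = np.argsort(c)                   # instance indices, low c first
    homogeneous = np.array_split(order, 2)  # quality-homogeneous halves
    balanced = [order[0::2], order[1::2]]   # alternate instances per fold
    return homogeneous, balanced

# Toy example: 20 instances with synthetic guessing parameters.
rng = np.random.default_rng(0)
c = rng.uniform(0.0, 0.6, size=20)
(homog_lo, homog_hi), (bal_a, bal_b) = guessing_splits(c)
```

Training on `homog_hi` alone mimics the abstract's degraded high-guessing scenario, while the balanced folds keep the guessing distribution similar across partitions.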
Similar Papers
Enhancing Classifier Evaluation: A Fairer Benchmarking Strategy Based on Ability and Robustness
Machine Learning (CS)
Finds the best computer learning programs.
A Dynamic, Ordinal Gaussian Process Item Response Theoretic Model
Methodology
Tracks how people's opinions change over time.
RIDE: Difficulty Evolving Perturbation with Item Response Theory for Mathematical Reasoning
Computation and Language
Tests if computers *really* understand math.