Dataset Pruning in RecSys and ML: Best Practice or Mal-Practice?
By: Leonie Winter
Potential Business Impact:
Makes movie suggestion tests more realistic by keeping all the data.
Offline evaluations in recommender system research depend heavily on benchmark datasets, many of which are pruned, such as the widely used MovieLens collections. This thesis examines the impact of data pruning (specifically, removing users with fewer than a specified number of interactions) on both dataset characteristics and algorithm performance. Five benchmark datasets were analysed in their unpruned form and at five successive pruning levels (minimum 5, 10, 20, 50, and 100 interactions per user). For each resulting coreset, we examined structural and distributional characteristics and trained and tested eleven representative algorithms. To further assess whether pruned datasets lead to artificially inflated performance results, we also evaluated models trained on the pruned training sets but tested on unpruned data. Results show that commonly applied core pruning can be highly selective, leaving as few as 2% of the original users in some datasets. Traditional algorithms achieved higher nDCG@10 scores when both trained and tested on pruned data; however, this advantage largely disappeared when they were evaluated on unpruned test sets. Across all algorithms, performance declined with increasing pruning levels when tested on unpruned data, underscoring how dataset reduction distorts measured recommender performance.
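The pruning step described in the abstract is straightforward to reproduce. Below is a minimal sketch of single-pass user-core pruning, assuming a plain (user_id, item_id, rating) interaction table; the column names, file name, and pandas-based pipeline are illustrative assumptions, not the thesis's actual preprocessing code.

```python
# Minimal sketch of the user-core pruning described in the abstract:
# drop every user with fewer than a given number of interactions.
# The (user_id, item_id, rating) schema and file name are assumptions
# made for illustration only.
import pandas as pd

def user_core_prune(interactions: pd.DataFrame, min_interactions: int) -> pd.DataFrame:
    """Keep only the rows of users with at least `min_interactions` interactions."""
    user_sizes = interactions.groupby("user_id")["user_id"].transform("size")
    return interactions[user_sizes >= min_interactions]

# Build one coreset per pruning level studied in the thesis.
ratings = pd.read_csv("ratings.csv")  # hypothetical MovieLens-style file
coresets = {k: user_core_prune(ratings, k) for k in (5, 10, 20, 50, 100)}

# The study's key check: train on a pruned coreset but evaluate on the
# unpruned test data, so pruning cannot inflate the reported nDCG@10.
```

Note that some recommender-systems pipelines apply k-core filtering iteratively to both users and items until the thresholds stabilise; the abstract describes only the user-side variant, which is what this sketch implements.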
Similar Papers
UNSEEN: Enhancing Dataset Pruning from a Generalization Perspective
CV and Pattern Recognition
Makes computer learning faster by picking important data.
Investigating Data Pruning for Pretraining Biological Foundation Models at Scale
Machine Learning (CS)
Makes big AI models for biology much smaller.
Effective Data Pruning through Score Extrapolation
Machine Learning (CS)
Trains smart programs faster with less data.