The Impact of Bootstrap Sampling Rate on Random Forest Performance in Regression Tasks
By: Michał Iwaniuk, Mateusz Jarosz, Bartłomiej Borycki, and more
Potential Business Impact:
Tuning the bootstrap sampling rate can make Random Forest regression models more accurate.
Random Forests (RFs) typically train each tree on a bootstrap sample of the same size as the training set, i.e., a bootstrap rate (BR) of 1.0. We systematically examine how varying BR from 0.2 to 5.0 affects RF performance across 39 heterogeneous regression datasets and 16 RF configurations, evaluating with repeated two-fold cross-validation and mean squared error. Our results demonstrate that tuning the BR can yield significant improvements over the default: the best setup relied on BR ≤ 1.0 for 24 datasets and BR > 1.0 for 15, while BR = 1.0 was optimal in only 4 cases. We establish a link between dataset characteristics and the preferred BR: datasets with strong global feature-target relationships favor higher BRs, while those with higher local target variance benefit from lower BRs. To further investigate this relationship, we conducted experiments on synthetic datasets with controlled noise levels. These experiments reproduce the observed bias-variance trade-off: in low-noise scenarios, higher BRs effectively reduce model bias, whereas in high-noise settings, lower BRs help reduce model variance. Overall, BR is an influential hyperparameter that should be tuned to optimize RF regression models.
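As a concrete illustration, the sketch below (not the authors' code) tunes BR on a synthetic regression task with scikit-learn's RandomForestRegressor, whose max_samples parameter sets each tree's bootstrap sample as a fraction of the training set. Note that scikit-learn only supports BR ≤ 1.0 this way; BRs above 1.0, as studied in the paper, would require custom resampling. The dataset, forest size, and candidate BR grid are illustrative assumptions.

```python
# Minimal sketch: tune the bootstrap rate (BR) of a Random Forest regressor
# via repeated two-fold cross-validation and mean squared error, mirroring
# the paper's evaluation protocol on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Illustrative synthetic regression data (not one of the paper's 39 datasets).
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Candidate BRs; scikit-learn's max_samples only accepts fractions up to 1.0,
# so BR > 1.0 would need a custom bootstrap outside this API.
bootstrap_rates = [0.2, 0.4, 0.6, 0.8, 1.0]

cv = RepeatedKFold(n_splits=2, n_repeats=5, random_state=0)  # repeated two-fold CV
results = {}
for br in bootstrap_rates:
    rf = RandomForestRegressor(
        n_estimators=200,
        bootstrap=True,
        # max_samples=None reproduces the default BR = 1.0 behavior.
        max_samples=br if br < 1.0 else None,
        random_state=0,
    )
    scores = cross_val_score(rf, X, y, cv=cv, scoring="neg_mean_squared_error")
    results[br] = -scores.mean()  # mean squared error (lower is better)

best_br = min(results, key=results.get)
print("MSE by bootstrap rate:", results)
print("Best BR on this dataset:", best_br)
```

On a low-noise dataset like this one, the selected BR will often sit at or near 1.0, whereas noisier targets tend to favor smaller BRs, consistent with the bias-variance pattern the paper reports.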
Similar Papers
Security Bug Report Prediction Within and Across Projects: A Comparative Study of BERT and Random Forest
Cryptography and Security
Finds security problems in computer code faster.
When do Random Forests work?
Machine Learning (Stat)
Makes computer learning better with messy data.
Adjusted Random Effect Block Bootstraps for Highly Unbalanced Clustered Data
Methodology
Fixes math for uneven groups of data.