The Impact of Bootstrap Sampling Rate on Random Forest Performance in Regression Tasks

Published: November 17, 2025 | arXiv ID: 2511.13952v1

By: Michał Iwaniuk, Mateusz Jarosz, Bartłomiej Borycki, and others

Potential Business Impact:

Tuning the bootstrap sampling rate can measurably improve the accuracy of Random Forest regression models.

Business Areas:
A/B Testing Data and Analytics

Random Forests (RFs) typically train each tree on a bootstrap sample of the same size as the training set, i.e., a bootstrap rate (BR) of 1.0. We systematically examine how varying BR from 0.2 to 5.0 affects RF performance across 39 heterogeneous regression datasets and 16 RF configurations, evaluating with repeated two-fold cross-validation and mean squared error. Our results demonstrate that tuning the BR can yield significant improvements over the default: the best setup relied on BR ≤ 1.0 for 24 datasets and BR > 1.0 for 15, while the default BR = 1.0 was optimal in only 4 cases. We establish a link between dataset characteristics and the preferred BR: datasets with strong global feature-target relationships favor higher BRs, while those with higher local target variance benefit from lower BRs. To further investigate this relationship, we conducted experiments on synthetic datasets with controlled noise levels. These experiments reproduce the observed bias-variance trade-off: in low-noise scenarios, higher BRs effectively reduce model bias, whereas in high-noise settings, lower BRs help reduce model variance. Overall, BR is an influential hyperparameter that should be tuned to optimize RF regression models.
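The idea above can be sketched in code. Note that scikit-learn's `RandomForestRegressor` exposes the sub-sampling rate via `max_samples`, but only for fractions up to 1.0, so exploring BR > 1.0 (as in the paper) requires drawing the bootstrap samples manually. The sketch below is a minimal illustration, not the authors' implementation; the function names and hyperparameters (`n_trees`, `max_features="sqrt"`) are illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def rf_with_bootstrap_rate(X, y, n_trees=100, br=1.0, random_state=0):
    """Train a simple RF regressor in which every tree sees a bootstrap
    sample of size round(br * n_samples); br may exceed 1.0."""
    rng = np.random.default_rng(random_state)
    n = len(y)
    sample_size = max(1, round(br * n))
    trees = []
    for i in range(n_trees):
        # Sampling WITH replacement, possibly more draws than data points.
        idx = rng.integers(0, n, size=sample_size)
        tree = DecisionTreeRegressor(max_features="sqrt", random_state=i)
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def rf_predict(trees, X):
    # Average the per-tree predictions, as a standard RF does.
    return np.mean([t.predict(X) for t in trees], axis=0)
```

With this helper, sweeping `br` over a grid such as `[0.2, 0.5, 1.0, 2.0, 5.0]` under cross-validated MSE reproduces the kind of tuning experiment the abstract describes.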

Country of Origin
🇵🇱 Poland

Page Count
31 pages

Category
Computer Science:
Machine Learning (CS)