Adaptive Forests For Classification
By: Dimitris Bertsimas, Yubing Cui
Potential Business Impact:
Makes computer predictions smarter by changing how the model learns.
Random Forests (RF) and Extreme Gradient Boosting (XGBoost) are two of the most widely used and best-performing classification and regression models. They aggregate equally weighted CART trees, generated randomly in RF or sequentially in XGBoost. In this paper, we propose Adaptive Forests (AF), a novel approach that adaptively selects the weights of the underlying CART models. AF combines (a) the Optimal Predictive-Policy Trees (OP2T) framework, which prescribes tailored, input-dependent, unequal weights to trees, and (b) Mixed Integer Optimization (MIO), which refines the weight candidates dynamically, enhancing overall performance. We demonstrate that AF consistently outperforms RF, XGBoost, and other weighted RF variants on binary and multi-class classification problems across more than 20 real-world datasets.
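The core idea above, replacing a forest's equal-weight vote with input-dependent, unequal tree weights, can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the per-input weights here use a naive proxy (each tree's accuracy on the validation points nearest the query), whereas AF learns them via the OP2T framework and refines weight candidates with MIO. All names and parameters below are assumptions for the sketch.

```python
# Sketch: aggregating CART trees with input-dependent weights instead of
# an equal-weight majority vote. NOT the AF algorithm from the paper --
# the weighting rule here is a simple local-accuracy proxy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
trees = []
for _ in range(25):  # bootstrap-sampled CART trees, as in a random forest
    idx = rng.integers(0, len(X_tr), len(X_tr))
    trees.append(DecisionTreeClassifier(max_depth=4).fit(X_tr[idx], y_tr[idx]))

def predict_adaptive(x, k=25):
    """Weighted vote: each tree is weighted by its accuracy on the k
    validation points nearest to x (a stand-in for learned weights)."""
    near = np.argsort(np.linalg.norm(X_val - x, axis=1))[:k]
    votes = np.zeros(2)
    for t in trees:
        w = (t.predict(X_val[near]) == y_val[near]).mean()  # input-dependent weight
        votes[t.predict(x.reshape(1, -1))[0]] += w
    return int(np.argmax(votes))

preds = np.array([predict_adaptive(x) for x in X_val])
print("adaptive-weight accuracy:", (preds == y_val).mean())
```

The design point the sketch illustrates: because the weights depend on the query input, two different test points can rank the same trees differently, which an equal-weight RF vote cannot do.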
Similar Papers
Lassoed Forests: Random Forests with Adaptive Lasso Post-selection
Machine Learning (Stat)
Improves computer predictions by combining methods.
Dynamic Features Adaptation in Networking: Toward Flexible training and Explainable inference
Machine Learning (CS)
AI learns new network tricks faster and explains them.