Maximum Risk Minimization with Random Forests
By: Francesco Freni, Anya Fries, Linus Kühne, and more
Potential Business Impact:
Helps computer predictions stay reliable in new, unseen situations.
We consider a regression setting where observations are collected in different environments modeled by different data distributions. The field of out-of-distribution (OOD) generalization aims to design methods that generalize better to test environments whose distributions differ from those observed during training. One line of work proposes minimizing the maximum risk across environments, a principle that we refer to as MaxRM (Maximum Risk Minimization). In this work, we introduce variants of random forests based on the principle of MaxRM. We provide computationally efficient algorithms and prove statistical consistency for our primary method. Our proposed method can be used with each of the following three risks: the mean squared error, the negative reward (which relates to the explained variance), and the regret (which quantifies the excess risk relative to the best predictor). For MaxRM with regret as the risk, we prove a novel out-of-sample guarantee over unseen test distributions. Finally, we evaluate the proposed methods on both simulated and real-world data.
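The MaxRM principle described above can be sketched with a minimal toy example. The code below is an illustrative assumption, not the paper's algorithm: it treats each environment as a list of (x, y) pairs, scores a small set of hypothetical candidate predictors by their worst-case mean squared error across environments, and selects the one whose maximum risk is smallest.

```python
# Minimal sketch of Maximum Risk Minimization (MaxRM) model selection.
# Hypothetical setup: each candidate predictor is a function x -> y_hat,
# and each environment is a list of (x, y) pairs drawn from a different
# distribution. MaxRM picks the candidate whose WORST per-environment
# risk (here: mean squared error) is smallest.

def mse(predict, data):
    """Mean squared error of `predict` on one environment's data."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

def maxrm_select(candidates, environments):
    """Return the candidate minimizing the maximum risk over environments."""
    return min(candidates,
               key=lambda f: max(mse(f, env) for env in environments))

# Two toy environments where y = 2x in one and y = 3x in the other.
env_a = [(x, 2.0 * x) for x in range(1, 6)]
env_b = [(x, 3.0 * x) for x in range(1, 6)]

# Hypothetical candidate predictors: linear with slopes 2, 2.5, and 3.
candidates = [lambda x, b=b: b * x for b in (2.0, 2.5, 3.0)]

best = maxrm_select(candidates, [env_a, env_b])
# The slope-2.5 predictor balances both environments, so its worst-case
# MSE (2.75) beats the environment-specific fits (worst-case MSE 11).
```

In the paper's setting the candidates would be random-forest predictors and the risk could also be the negative reward or the regret; the min-over-max selection structure is the shared idea.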
Similar Papers
Deceptive Risk Minimization: Out-of-Distribution Generalization by Deceiving Distribution Shift Detectors
Machine Learning (CS)
Teaches computers to learn what's real, not fake.
Robust Minimax Boosting with Performance Guarantees
Machine Learning (Stat)
Fixes computer mistakes from bad information.
Online Policy Learning via a Self-Normalized Maximal Inequality
Machine Learning (Stat)
Helps computers learn better from changing information.