Learning Decision Trees as Amortized Structure Inference
By: Mohammed Mahfoud, Ghait Boukachab, Michał Koziarski, and others
Potential Business Impact:
Builds more accurate and interpretable prediction models from tabular data.
Building predictive models for tabular data presents fundamental challenges, notably in scaling consistently, i.e., more resources translating to better performance, and in generalizing systematically beyond the training data distribution. Designing decision tree models remains especially challenging given the intractably large search space, and most existing methods rely on greedy heuristics, while deep learning inductive biases expect a temporal or spatial structure not naturally present in tabular data. We propose a hybrid amortized structure inference approach to learn predictive decision tree ensembles given data, formulating decision tree construction as a sequential planning problem. We train a deep reinforcement learning (GFlowNet) policy to solve this problem, yielding a generative model that samples decision trees from the Bayesian posterior. We show that our approach, DT-GFN, outperforms state-of-the-art decision tree and deep learning methods on standard classification benchmarks derived from real-world data, as well as in robustness to distribution shifts and in anomaly detection, all while yielding interpretable models with shorter description lengths. Samples from the trained DT-GFN model can be ensembled to construct a random forest, and we further show that performance scales consistently with ensemble size, yielding ensembles of predictors that continue to generalize systematically.
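To make the "sequential planning" framing concrete, here is a minimal, hypothetical Python sketch of the idea: a tree is grown by sequentially sampling split actions from a policy, and many sampled trees are ensembled by majority vote, as with a random forest. All names (`sample_tree`, `random_policy`, `predict_ensemble`) are illustrative stand-ins and not the paper's code; in DT-GFN the split-choosing policy is a trained GFlowNet that samples trees in proportion to their posterior probability, whereas the stand-in below picks splits at random.

```python
# Hypothetical sketch (not the authors' implementation): decision-tree construction
# framed as sequential decisions, with an abstract "policy" choosing splits, and
# posterior-style samples ensembled by majority vote.
import numpy as np

rng = np.random.default_rng(0)

def sample_tree(X, y, policy, depth=0, max_depth=3):
    """Grow one tree by sequentially sampling (feature, threshold) actions from a policy.
    Returns a nested dict; leaves store the majority class."""
    if depth == max_depth or len(np.unique(y)) == 1:
        return {"leaf": int(np.bincount(y).argmax())}
    feature, threshold = policy(X, y)           # one "action" in the sequential-planning view
    left = X[:, feature] <= threshold
    if left.all() or (~left).all():             # degenerate split -> stop growing
        return {"leaf": int(np.bincount(y).argmax())}
    return {
        "feature": feature,
        "threshold": threshold,
        "left": sample_tree(X[left], y[left], policy, depth + 1, max_depth),
        "right": sample_tree(X[~left], y[~left], policy, depth + 1, max_depth),
    }

def random_policy(X, y):
    """Stand-in for a trained GFlowNet policy: picks a random feature and threshold."""
    f = int(rng.integers(X.shape[1]))
    t = rng.uniform(X[:, f].min(), X[:, f].max())
    return f, t

def predict_tree(tree, x):
    # Follow split decisions down to a leaf.
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree["leaf"]

def predict_ensemble(trees, X):
    """Majority vote over sampled trees -- the 'forest from posterior samples' idea."""
    votes = np.array([[predict_tree(t, x) for t in trees] for x in X])
    return np.array([np.bincount(row).argmax() for row in votes])

# Toy usage: 200 points, 2 classes; ensemble of 25 sampled trees.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
forest = [sample_tree(X, y, random_policy) for _ in range(25)]
print("train accuracy:", (predict_ensemble(forest, X) == y).mean())
```

The key difference in the actual method is the policy: rather than random choices, a GFlowNet is trained so that complete trees are sampled with probability proportional to a Bayesian posterior reward, which is what lets larger ensembles of samples keep improving.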
Similar Papers
Improving Deep Tabular Learning
Machine Learning (CS)
Helps computers learn from messy data better.
Inductive inference of gradient-boosted decision trees on graphs for insurance fraud detection
Machine Learning (CS)
Finds fake insurance claims faster.
TABFAIRGDT: A Fast Fair Tabular Data Generator using Autoregressive Decision Trees
Machine Learning (CS)
Makes computer models fairer by fixing biased data.