Keywords: Tabular data, Decision trees, Random forests, Amortized inference, Generative Models, Deep RL, GFlowNets.
TL;DR: Learning decision trees and random forests as sequential planning over decision rules, using deep RL and Bayesian inference.
Abstract: Building predictive models for tabular data presents fundamental challenges, notably in scaling consistently, *i.e.*, more resources translating to better performance, and generalizing systematically beyond the training data distribution. Designing decision tree models remains especially challenging given the intractably large search space, and most existing methods rely on greedy heuristics, while deep learning inductive biases expect a temporal or spatial structure not naturally present in tabular data. We propose a hybrid *amortized structure inference* approach to learn predictive decision tree ensembles given data, formulating decision tree construction as a *sequential planning* problem. We train a deep reinforcement learning (GFlowNet) policy to solve this problem, yielding a generative model that samples decision trees from the Bayesian posterior. We show that our approach, DT-GFN, outperforms state-of-the-art decision tree and deep learning methods on standard classification benchmarks derived from real-world data, in robustness to distribution shifts, and in anomaly detection, all while yielding interpretable models with shorter description lengths. Samples from the trained DT-GFN model can be ensembled to construct a random forest, and we further show that performance scales consistently with ensemble size, yielding ensembles of predictors that continue to generalize systematically.
Code is available at [this anonymous link](https://anonymous.4open.science/r/DT-GFN-1FBA/README.md).
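To make the sequential-planning formulation in the abstract concrete, here is a minimal, purely illustrative sketch: a tree is built by a policy that repeatedly chooses an action (terminate in a leaf, or split on a rule), and sampled trees are ensembled by majority vote. All function names are hypothetical, and a uniform-random policy stands in for the trained GFlowNet; see the linked repository for the actual implementation.

```python
# Illustrative sketch only: sequential tree construction with a random
# stand-in policy, plus majority-vote ensembling of the sampled trees.
import random
from collections import Counter

def sample_tree(depth, n_features, rng):
    """Sequentially choose actions: terminate (leaf) or split on a rule."""
    if depth == 0 or rng.random() < 0.3:          # action: stop -> leaf
        return {"leaf": rng.choice([0, 1])}       # leaf predicts a class
    feat = rng.randrange(n_features)              # action: pick a feature
    thr = rng.random()                            # action: pick a threshold
    return {"rule": (feat, thr),
            "left": sample_tree(depth - 1, n_features, rng),
            "right": sample_tree(depth - 1, n_features, rng)}

def predict(tree, x):
    """Route an input down the tree until reaching a leaf."""
    while "rule" in tree:
        feat, thr = tree["rule"]
        tree = tree["left"] if x[feat] <= thr else tree["right"]
    return tree["leaf"]

def forest_predict(trees, x):
    """Majority vote over trees sampled from the (approximate) posterior."""
    votes = Counter(predict(t, x) for t in trees)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
trees = [sample_tree(depth=3, n_features=4, rng=rng) for _ in range(25)]
print(forest_predict(trees, [0.2, 0.8, 0.5, 0.1]))
```

In DT-GFN the random choices above are replaced by a learned GFlowNet policy whose sampling probabilities are proportional to the Bayesian posterior over trees, which is what makes the ensemble a posterior-weighted random forest rather than an arbitrary one.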
Submission Number: 58