Generative Design of Decision Tree Policies for Reinforcement Learning

Published: 17 Jun 2024, Last Modified: 16 Jul 2024 · 2nd SPIGM @ ICML Poster · CC BY 4.0
Keywords: Discrete Optimization, Hybrid Optimization, Decision Trees, Deep Symbolic Optimization, Reinforcement Learning, Generative Design, Interpretable Machine Learning
TL;DR: We propose a novel discrete-continuous generative design approach for decision trees in RL, outperforming state-of-the-art methods.
Abstract: Decision trees are an attractive choice for modeling policies in control environments due to their interpretability, conciseness, and ease of implementation. However, generating performant decision trees in this context has several challenges, including the hybrid discrete-continuous nature of the search space, the variable-length nature of the trees, the existence of parent-dependent constraints, and the high computational cost of evaluating the objective function in reinforcement learning settings. Traditional methods, such as Mixed Integer Programming or Mixed Bayesian Optimization, are unsuitable for these problems due to the variable-length constrained search space and the high number of objective function evaluations required. To address these challenges, we propose to extend approaches in the field of neural combinatorial optimization to handle the hybrid discrete-continuous optimization problem of generating decision trees. Our approach demonstrates significant improvements in performance and sample efficiency over the state-of-the-art methods for interpretable reinforcement learning with decision trees.
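To make the hybrid search space concrete, here is a minimal illustrative sketch (not the authors' code) of a decision tree policy: each internal node carries a discrete choice (which observation feature to test) and a continuous parameter (the split threshold), while leaves carry discrete actions. The `Node` class, the `act` function, and the CartPole-style example tree are all hypothetical names introduced for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class Node:
    # Discrete choice: index of the observation feature to test
    # (-1 marks a leaf, per this sketch's convention).
    feature: int = -1
    # Continuous choice: split threshold for the chosen feature.
    threshold: float = 0.0
    # Discrete action emitted at a leaf (valid only when feature == -1).
    action: int = -1
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def act(node: Node, obs: Sequence[float]) -> int:
    """Descend the tree on an observation until a leaf, return its action."""
    while node.feature != -1:
        node = node.left if obs[node.feature] <= node.threshold else node.right
    return node.action


# Hypothetical depth-1 tree for a CartPole-style observation
# (obs[2] = pole angle): push left when the pole tilts left, else right.
tree = Node(feature=2, threshold=0.0,
            left=Node(action=0), right=Node(action=1))
print(act(tree, [0.1, 0.0, -0.05, 0.0]))  # -> 0
```

Searching over such trees means jointly choosing the variable-length discrete structure (`feature`, `action`, tree shape) and the continuous `threshold` values, with each node's valid choices depending on its parent, which is the constrained hybrid optimization problem the abstract describes.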
Submission Number: 49