Policy Trees for Prediction: Interpretable and Adaptive Model Selection for Machine Learning

Published: 10 Oct 2024, Last Modified: 03 Aug 2025 · RegML 2024 · CC BY 4.0
Keywords: Model Selection, Rejection Learning, Decision Trees, Optimization
TL;DR: Interpretable framework for adaptive model selection and rejection learning.
Abstract: As a multitude of capable machine learning (ML) models become widely available in forms such as open-source software and public APIs, central questions remain regarding their use in real-world applications, especially in high-stakes decision-making. Is there always one best model that should be used? When are the models likely to be error-prone? Should a black-box or an interpretable model be used? In this work, we develop a prescriptive methodology to address these key questions, introducing a tree-based approach, __Optimal Predictive-Policy Trees__ (OP²T), that yields interpretable policies for adaptively selecting a predictive model or ensemble, along with a parameterized option to reject making a prediction. We base our methods on learning globally optimized prescriptive trees. Our approach enables interpretable and adaptive model selection and rejection while assuming access only to model outputs, and it works with both structured and unstructured datasets by learning policies over different feature spaces, including the model outputs themselves. We evaluate our approach on real-world regression and classification tasks with structured and unstructured data, demonstrating strong performance against baseline methods while yielding insights that help answer critical questions about which models to use, and when.
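To make the core idea concrete, here is a minimal sketch of tree-based adaptive model selection with a rejection option. It is not the paper's OP²T method: the paper learns globally optimized prescriptive trees, whereas this sketch substitutes scikit-learn's greedy `DecisionTreeClassifier` as a stand-in policy learner. The function name `make_policy_tree`, the parameter `reject_cost`, and the squared-error action costs are all illustrative assumptions, not details from the paper.

```python
# Hedged sketch only: a greedy decision tree standing in for the paper's
# globally optimized prescriptive trees. make_policy_tree and reject_cost
# are hypothetical names introduced for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

REJECT = -1  # action label for abstaining from a prediction


def make_policy_tree(X, y, model_preds, reject_cost=1.0, max_depth=3):
    """Fit an interpretable tree policy that picks, per instance, which
    candidate model's prediction to use, or rejects.

    X           : (n, d) feature matrix the policy may branch on
    y           : (n,)   ground-truth targets (regression, for this sketch)
    model_preds : (n, m) predictions from m pre-trained models
    reject_cost : fixed cost charged for abstaining; tunes the coverage
                  vs. accuracy trade-off (the "parameterized option to
                  reject" described in the abstract)
    """
    n, m = model_preds.shape

    # Per-instance cost of each action: squared error of each model's
    # prediction, plus one constant-cost "reject" column.
    errors = (model_preds - y[:, None]) ** 2            # shape (n, m)
    costs = np.hstack([errors, np.full((n, 1), reject_cost)])
    best_action = costs.argmin(axis=1)                  # values in 0..m
    best_action[best_action == m] = REJECT              # map last column to reject

    # Branch on the original features AND the model outputs, mirroring the
    # abstract's "policies over different feature spaces, including the
    # model outputs".
    policy_features = np.hstack([X, model_preds])
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(policy_features, best_action)
    return tree
```

At prediction time, such a policy routes each instance down the tree to a leaf that names a model (or abstains), so the selection rule can be read directly off the tree's splits; the paper's contribution is learning this tree structure via global optimization rather than the greedy heuristic used above.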
Submission Number: 33