Improving the Usefulness of Decision Trees as Explanations

15 Apr 2026 (modified: 26 Apr 2026) · Under review for TMLR · CC BY 4.0
Abstract: Tree-based models are widely used for classification on tabular data, where they can be competitive with deep neural networks and, under some conditions, explainable. Their explainability depends on the tree's depth and on the accuracy of each leaf: trees whose leaves have unbalanced accuracy can give misleading explanations, and low-accuracy leaves offer less useful explanations to the individuals they classify. Here, we train a shallow tree with the objective of minimizing the maximum misclassification error across its leaf nodes. The shallow tree provides a more useful global explanation, and its overall statistical performance can be made comparable to that of state-of-the-art methods by extending the leaves with additional models.
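To make the quantity in the abstract concrete, the following is a minimal sketch (not the paper's training method) that fits a standard shallow scikit-learn tree and then measures the per-leaf misclassification error, i.e. the maximum of which is the objective the abstract proposes to minimize. The dataset, depth, and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

# Illustrative data and depth; the paper's actual setup may differ.
X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Map each training sample to the leaf it falls into.
leaf_ids = tree.apply(X)
pred = tree.predict(X)

# Per-leaf misclassification error: fraction of samples in the leaf
# whose label differs from the leaf's prediction.
leaf_errors = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    leaf_errors[int(leaf)] = float(np.mean(pred[mask] != y[mask]))

# A standard CART tree minimizes average impurity, so its leaves can
# have very unbalanced errors; the abstract's objective targets this max.
worst = max(leaf_errors.values())
print(f"per-leaf errors: {leaf_errors}")
print(f"max leaf error: {worst:.3f}")
```

A tree trained this way only *reports* the unbalanced leaf accuracies; the paper's contribution is to train with the max-leaf-error objective directly, and then to extend leaves with additional models to recover overall accuracy.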
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Magda_Gregorova2
Submission Number: 8446