Abstract: Despite the widespread adoption of ensemble models, it is widely acknowledged within the ML community that they offer limited interpretability. For instance, while a single decision tree is considered interpretable, ensembles of decision trees (e.g., boosted trees) are usually regarded as black boxes. Yet despite this consensus, the topic has received little theoretical or mathematical attention. In this work, we provide a detailed analysis of the interpretability of ensemble models through the lens of computational complexity theory. In a nutshell, we explore different forms of explanations and ask whether obtaining an explanation for an ensemble is strictly harder, computationally, than obtaining one for its constituent base models. We show that this is indeed the case for ensembles of interpretable models, such as decision trees or linear models, but not for ensembles of more complex models, such as neural networks. Next, we perform a fine-grained analysis using parameterized complexity to measure the impact of different problem parameters on an ensemble's interpretability. Our findings reveal that even if we shrink the size of every base model in an ensemble substantially, the ensemble as a whole remains intractable to interpret. An analysis of the number of base models, however, yields a surprising contrast: while ensembles consisting of a bounded number of decision trees can be interpreted efficiently, ensembles consisting of even a small (constant) number of linear models are computationally intractable to interpret.
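For concreteness, the following is a minimal sketch (not from the submission; the model, features, and function names are hypothetical illustrations) of one standard explanation query from this literature, a sufficient reason: a subset of features whose fixed values alone force the model's prediction. For a majority-vote ensemble, the naive check below enumerates every completion of the unfixed features, which is exponential in their number and hints at why ensembles can be harder to interpret than their base models.

```python
# Hypothetical sketch: brute-force check of a "sufficient reason" -- a
# feature subset S such that fixing x's values on S determines the
# model's prediction, no matter what the remaining features take.
from itertools import product

def ensemble_predict(trees, x):
    # Majority vote over the base models.
    votes = sum(t(x) for t in trees)
    return 1 if votes > len(trees) / 2 else 0

def is_sufficient_reason(predict, x, S, n):
    """True iff fixing x on the index set S forces predict(x) over {0,1}^n."""
    target = predict(x)
    free = [i for i in range(n) if i not in S]
    for bits in product([0, 1], repeat=len(free)):  # 2^(n-|S|) completions
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if predict(tuple(y)) != target:
            return False
    return True

# A toy ensemble of three stump-like "trees" over binary features.
trees = [
    lambda x: 1 if x[0] == 1 else 0,
    lambda x: 1 if x[1] == 1 else 0,
    lambda x: 1 if x[0] == 1 and x[2] == 1 else 0,
]
x = (1, 1, 1)
print(is_sufficient_reason(lambda z: ensemble_predict(trees, z), x, {0, 1}, 3))
```

For a single decision tree, by contrast, the same query can typically be answered by reasoning over root-to-leaf paths rather than by exhaustive enumeration, which is one intuition behind the tractability gap the abstract describes.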
Keywords: explainable AI, XAI, explainability
TL;DR: A framework for analyzing the interpretability of ensemble models using computational complexity.
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3050