Fast TreeSHAP: Accelerating SHAP Value Computation for Trees

Published: 17 Oct 2021, Last Modified: 12 Mar 2024
Venue: XAI 4 Debugging Workshop @ NeurIPS 2021 (Poster)
Keywords: TreeSHAP, time complexity, pre-computation
TL;DR: Fast TreeSHAP v1 and v2 improve the computational efficiency of TreeSHAP on large datasets: in practice, v1 is 1.5x faster with unchanged memory cost, and v2 is 2.5x faster at the cost of slightly higher memory usage.
Abstract: SHAP (SHapley Additive exPlanation) values are one of the leading tools for interpreting machine learning models, with strong theoretical guarantees (consistency, local accuracy) and a wide availability of implementations and use cases. Even though computing SHAP values takes exponential time in general, TreeSHAP takes polynomial time on tree-based models. While this speedup is significant, TreeSHAP can still dominate the computation time of industry-level machine learning solutions on datasets with millions or more entries, causing delays in post-hoc model diagnosis and interpretation services. In this paper we present two new algorithms, Fast TreeSHAP v1 and v2, designed to improve the computational efficiency of TreeSHAP for large datasets. We empirically find that Fast TreeSHAP v1 is 1.5x faster than TreeSHAP while keeping the memory cost unchanged, and that Fast TreeSHAP v2 is 2.5x faster than TreeSHAP at the cost of slightly higher memory usage, thanks to the pre-computation of expensive TreeSHAP steps. We also show that Fast TreeSHAP v2 is well-suited for multi-time model interpretation, yielding up to 3x faster explanation of newly incoming samples. The code repository to replicate the results in this paper is available at https://drive.google.com/drive/u/4/folders/1P0V0Vj42K04QcT39wmbMM4Y6joyc67ZQ.
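The abstract notes that computing SHAP values takes exponential time in general; TreeSHAP's contribution is avoiding this blow-up for tree models. As a minimal illustration of the baseline cost (this is *not* the TreeSHAP or Fast TreeSHAP algorithm), the sketch below computes exact Shapley values by brute-force enumeration of all coalitions for a toy additive value function; the names `shapley_values`, `value_fn`, and `contrib` are illustrative, not from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values via all 2^n coalitions -- exponential in
    the number of features, which is what TreeSHAP avoids for trees."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                total += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phi[i] = total
    return phi

# Toy additive "model": v(S) = sum of per-feature contributions in S.
# For an additive game, each feature's Shapley value equals its own
# contribution, and the values sum to v(full set) (efficiency).
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(contrib[f] for f in S)
print(shapley_values(v, ["a", "b", "c"]))
```

The double loop over coalition sizes and subsets is the exponential step; TreeSHAP replaces it with a polynomial-time dynamic program over tree paths, and the paper's Fast TreeSHAP variants further reduce that polynomial cost via pre-computation.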
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2109.09847/code)
