An Empirical Study of the Effect of Background Data Size on the Stability of SHapley Additive exPlanations (SHAP) for Deep Learning Models

01 Mar 2023 (modified: 03 Nov 2024) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Interpretable Machine Learning, SHapley Additive exPlanations (SHAP), Background Data
Abstract: SHapley Additive exPlanations (SHAP) is a popular method that requires a background dataset to uncover the decision mechanism of artificial neural networks (ANNs). Generally, the background dataset consists of instances randomly sampled from the training dataset. However, the choice of sampling size and its effect on SHAP remain unexplored. In this work, we empirically studied this effect and distilled several practical tips for applying SHAP. The code is publicly accessible at https://github.com/Han-Yuan-Med/shap-bg-size.
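To make the role of the background dataset concrete, below is a minimal sketch (not the authors' pipeline from the linked repository): it explains the same instances with background samples of several sizes drawn from the training data and compares the resulting SHAP attributions. The toy network, feature dimensionality, and background sizes are illustrative assumptions.

```python
# Minimal sketch: vary the background data size passed to shap.DeepExplainer
# and inspect how the attributions change. Model and data are placeholders.
import numpy as np
import torch
import torch.nn as nn
import shap

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20)).astype(np.float32)

# Toy feed-forward network standing in for the deep learning model under study
# (in practice the model would be trained before being explained).
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

X_explain = torch.from_numpy(X_train[:50])  # instances to explain

for bg_size in (10, 50, 100, 500):
    # Background dataset: instances randomly sampled from the training data.
    idx = rng.choice(len(X_train), size=bg_size, replace=False)
    background = torch.from_numpy(X_train[idx])

    explainer = shap.DeepExplainer(model, background)
    sv = explainer.shap_values(X_explain)
    sv = sv[0] if isinstance(sv, list) else sv  # older SHAP versions return a list per output
    mean_abs = np.abs(np.asarray(sv)).reshape(len(X_explain), -1).mean(axis=0)
    print(f"background size {bg_size:4d}: most influential feature = {mean_abs.argmax()}")
```

In the stability setting studied by the paper, one would additionally repeat the random draw several times at each background size and compare the resulting attribution rankings across repetitions.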