Tractable Shapley Values and Interactions via Tensor Networks
TL;DR: Turn SHAP/SII (and collusion) into a few tensor-network probes plus a Vandermonde solve: same accuracy, 25–1000× faster than KernelSHAP in local and amortized settings.
Abstract: We show how to replace the $O(2^n)$ coalition enumeration over $n$ features behind Shapley values and Shapley-style interaction indices with a few-evaluation scheme on a tensor-network (TN) surrogate: TN-SHAP. The key idea is to represent a predictor’s local behavior as a factorized multilinear map, so that coalitional quantities become linear probes of a coefficient tensor. TN-SHAP replaces exhaustive coalition sweeps with a small number of targeted evaluations that extract order-$k$ Shapley interactions.
In particular, both order-1 (single-feature) and order-2 (pairwise) computations have cost $O\!\big(n\,\mathrm{poly}(\chi) + n^2\big)$, where $\chi$ is the TN’s maximal cut rank.
We provide theoretical guarantees on the approximation error and tractability of TN-SHAP.
On UCI datasets, TN-SHAP matches exact enumeration on the fitted surrogate while reducing the number of model evaluations by orders of magnitude, and it achieves \textbf{25--1000$\times$} wall-clock speedups over KernelSHAP-IQ at comparable accuracy while amortizing surrogate training across local cohorts.
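To make the "few probes + Vandermonde" idea concrete, below is a minimal sketch (not the authors' implementation, and omitting the tensor-network factorization). For a multilinear surrogate $g$ over coalition indicators, the Shapley value of feature $i$ equals $\int_0^1 \big[g(q\mathbf{1};\,z_i{=}1) - g(q\mathbf{1};\,z_i{=}0)\big]\,dq$, a polynomial of degree at most $n-1$ in $q$, so $n$ diagonal probes plus a Vandermonde solve recover it exactly. The helper `shapley_from_multilinear` and the toy surrogate `g` are hypothetical names used only for illustration.

```python
# Minimal sketch (hypothetical helper, not the paper's code): exact Shapley
# values of a multilinear surrogate g: [0,1]^n -> R via diagonal probes and
# a Vandermonde solve.  For multilinear g, E_{z_j ~ Bern(q)}[g(z)] = g(q,...,q),
# so phi_i = \int_0^1 [g(q*1; z_i=1) - g(q*1; z_i=0)] dq, a degree-(n-1)
# polynomial in q that n probes determine exactly.
import numpy as np

def shapley_from_multilinear(g, n, nodes=None):
    if nodes is None:
        nodes = np.linspace(0.0, 1.0, n)        # n distinct probe locations in [0, 1]
    V = np.vander(nodes, N=n, increasing=True)  # Vandermonde matrix [q^0, ..., q^(n-1)]
    weights = 1.0 / np.arange(1, n + 1)         # integral of q^k over [0, 1] is 1/(k+1)
    phi = np.zeros(n)
    for i in range(n):
        h = np.empty(n)
        for t, q in enumerate(nodes):
            z = np.full(n, q)
            z[i] = 1.0
            on = g(z)                           # probe with feature i "on"
            z[i] = 0.0
            off = g(z)                          # probe with feature i "off"
            h[t] = on - off                     # value of the degree-(n-1) polynomial at q
        coeffs = np.linalg.solve(V, h)          # recover polynomial coefficients
        phi[i] = coeffs @ weights               # exact integral over q in [0, 1]
    return phi

# Toy multilinear surrogate (hypothetical): g(z) = 1 + 2*z0 + 3*z1*z2.
g = lambda z: 1.0 + 2.0 * z[0] + 3.0 * z[1] * z[2]
print(shapley_from_multilinear(g, n=3))         # ~ [2.0, 1.5, 1.5]
```

The sketch only illustrates the probe-plus-Vandermonde reduction on a generic multilinear surrogate; in TN-SHAP it is the tensor-network factorization that makes each probe cheap (polynomial in the cut rank $\chi$) and shares work across features.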
Submission Number: 1353