L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data

Published: 21 Dec 2018, Last Modified: 05 May 2023
ICLR 2019 Conference Blind Submission
Readers: Everyone
Abstract: Instancewise feature scoring is a method for model interpretation that yields, for each test instance, a vector of importance scores associated with the features. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions, but they incur a complexity that is exponential in the number of features. This combinatorial explosion arises from the definition of the Shapley value and prevents these methods from scaling to large data sets and complex models. We focus on settings in which the data have a graph structure and the contribution of features to the target variable is well-approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring on black-box models. We establish the relationship of our methods to the Shapley value and to a closely related concept from cooperative game theory known as the Myerson value. We demonstrate on both language and image data that our algorithms compare favorably with other methods, using both quantitative metrics and human evaluation.
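
The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal illustration, under assumed names (`exact_shapley`, `local_shapley`, `value_fn`, window radius `k`), of why the exact Shapley value is exponential in the number of features and how restricting each feature's cooperating subsets to a small graph neighborhood (here a chain graph, in the spirit of L-Shapley) makes the per-feature cost independent of the total feature count.

```python
# Minimal sketch (not the paper's code): exact Shapley values vs. a
# neighborhood-restricted estimate on a chain graph of features.
from itertools import combinations
from math import comb


def exact_shapley(value_fn, d):
    """Exact Shapley values for a set function over features 0..d-1.

    Enumerates every subset of the remaining d-1 features for each feature,
    so the cost is exponential in d.
    """
    scores = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(len(others) + 1):
            # Standard Shapley weight |S|!(d-1-|S|)!/d! = 1 / (d * C(d-1, |S|)).
            weight = 1.0 / (d * comb(d - 1, r))
            for subset in combinations(others, r):
                s = set(subset)
                scores[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return scores


def local_shapley(value_fn, d, k=2):
    """Neighborhood-restricted estimate on a chain graph (illustrative only).

    For each feature i, only subsets drawn from the window of radius k around
    i are enumerated, so the per-feature cost is at most 2^(2k) regardless of d.
    """
    scores = [0.0] * d
    for i in range(d):
        window = [j for j in range(max(0, i - k), min(d, i + k + 1)) if j != i]
        m = len(window) + 1  # size of the neighborhood, including i itself
        for r in range(len(window) + 1):
            weight = 1.0 / (m * comb(m - 1, r))
            for subset in combinations(window, r):
                s = set(subset)
                scores[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return scores


if __name__ == "__main__":
    # Toy value function standing in for a black-box model score:
    # it rewards adjacent pairs of "present" features.
    def value_fn(subset):
        return sum(1.0 for j in subset if j + 1 in subset)

    d = 8
    print("exact :", [round(s, 3) for s in exact_shapley(value_fn, d)])
    print("local :", [round(s, 3) for s in local_shapley(value_fn, d, k=2)])
```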
Keywords: Model Interpretation, Feature Selection
TL;DR: We develop two linear-complexity algorithms for model-agnostic interpretation based on the Shapley value, in settings where the contribution of features to the target is well-approximated by a graph-structured factorization.
Code: [Jianbo-Lab/LCShapley](https://github.com/Jianbo-Lab/LCShapley)
Data: [AG News](https://paperswithcode.com/dataset/ag-news), [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [IMDb Movie Reviews](https://paperswithcode.com/dataset/imdb-movie-reviews), [MNIST](https://paperswithcode.com/dataset/mnist), [Yahoo! Answers](https://paperswithcode.com/dataset/yahoo-answers)