Efficient Neural Network-Based Estimation of Interval Shapley Values

Published: 01 Jan 2024 · Last Modified: 07 Apr 2025 · IEEE Trans. Knowl. Data Eng. 2024 · License: CC BY-SA 4.0
Abstract: The use of Shapley Values (SVs) to explain machine learning model predictions is well established. Recent research efforts have been devoted to generating efficient neural network-based SV estimates. However, the variability of these estimates, which depend on the selected data sampling, model, and training parameters, calls their reliability into question. By leveraging the concept of Interval SVs (ISVs), we propose to incorporate SV uncertainty directly into the learning process. Specifically, we explain ensemble models composed of multiple predictors, each generating potentially different outcomes. Unlike existing approaches, the explainer design is tailored to learning Interval SVs rather than SVs alone. We present three new neural network-based explainers relying on different ISV paradigms: a Multi-Task Learning network inspired by the weighted least squares characterization of the Shapley value, and two Interval Shapley-Like Value neural estimators. The experiments thoroughly evaluate the new approaches on ten benchmark datasets, seeking the best trade-off between interval accuracy and explainer efficiency.
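As background for the weighted least squares characterization mentioned above, the sketch below computes exact Shapley values of a toy cooperative game by solving the constrained weighted least squares problem with the standard Shapley kernel weights. This is only an illustrative NumPy example, not the paper's neural estimator; the function name shapley_wls and the toy game are ours.

```python
import itertools
from math import comb

import numpy as np


def shapley_wls(value, n):
    """Exact Shapley values via the weighted least squares characterization.

    `value` maps a frozenset coalition S of {0, ..., n-1} to its payoff v(S).
    Solves   min_phi  sum_S w(S) * (v(S) - v(empty) - sum_{i in S} phi_i)^2
    s.t.     sum_i phi_i = v(N) - v(empty),
    with the Shapley kernel w(S) = (n-1) / (C(n,|S|) * |S| * (n-|S|)).
    """
    players = list(range(n))
    v_empty = value(frozenset())
    v_full = value(frozenset(players))

    rows, targets, weights = [], [], []
    for size in range(1, n):  # proper, non-empty coalitions only
        for S in itertools.combinations(players, size):
            z = np.zeros(n)
            z[list(S)] = 1.0
            rows.append(z)
            targets.append(value(frozenset(S)) - v_empty)
            weights.append((n - 1) / (comb(n, size) * size * (n - size)))

    Z = np.array(rows)
    y = np.array(targets)
    W = np.diag(weights)

    # Equality-constrained weighted least squares solved through its KKT system.
    A = Z.T @ W @ Z
    b = Z.T @ W @ y
    KKT = np.block([[A, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([b, [v_full - v_empty]])
    return np.linalg.solve(KKT, rhs)[:n]


# Toy 3-player game: v(S) = (sum of member weights)^2.
player_weights = np.array([1.0, 2.0, 3.0])
game = lambda S: float(sum(player_weights[list(S)])) ** 2
print(shapley_wls(game, 3))  # expected: [ 6. 12. 18.]
```

The paper's Multi-Task Learning explainer is inspired by this characterization but learns interval-valued attributions for ensembles of predictors, rather than solving the exact system above.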