KernelSHAP-NAS: A Shapley Additive Explanatory Approach for Characterizing Operation Influences

Published: 2025 · Last Modified: 27 Jan 2026 · Neural Comput. Appl. 2025 · CC BY-SA 4.0
Abstract: Neural architecture search (NAS) is a rapidly growing field that has shown promising results in various applications. Existing gradient-based NAS approaches achieve high-quality results while reducing the cost of computation. However, the updated architecture parameters do not accurately reflect the importance of different operations. Recently, a Shapley-value-based method, Shapley-NAS, was introduced to overcome this drawback. Computing Shapley values is known to be an NP-complete problem, a practical constraint that Shapley-NAS sidesteps with Monte-Carlo sampling. However, this raises the concern of whether the sampling is fair, since the number of samples is very small compared to the total number of permutations in the sample space, which may lead to unstable performance. KernelSHAP, a well-known framework in explainable AI designed to interpret a model's predictions, precisely approximates the Shapley value of each feature in a dataset, and can potentially be used to approximate Shapley values in other tasks. By leveraging KernelSHAP's ability to approximate Shapley values, we propose KernelSHAP-NAS, a more insightful approach to exploring the contribution of each operation in the supernet. The proposed algorithm outperforms existing approaches in the DARTS search space on the CIFAR-10 dataset. The results also show that KernelSHAP-NAS achieves better Pearson correlation and accuracy than other existing architecture parameter search methods on NAS-Bench-201.
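To illustrate the sampling concern raised above, the sketch below contrasts exact Shapley values (an average of marginal contributions over all player orderings) with a Monte-Carlo estimate over a few sampled orderings, the strategy Shapley-NAS uses to sidestep the exponential cost. The characteristic function, weights, and synergy bonus are hypothetical toy stand-ins; in NAS the "players" would be candidate operations and the coalition value would come from supernet validation accuracy.

```python
import itertools
import random
from math import factorial

# Hypothetical per-player weights for a toy cooperative game.
WEIGHTS = {0: 3.0, 1: 1.0, 2: 2.0}
PLAYERS = list(WEIGHTS)

def v(coalition):
    """Toy characteristic function: sum of member weights plus a
    small synergy bonus for coalitions of two or more players."""
    base = sum(WEIGHTS[p] for p in coalition)
    return base + (0.5 if len(coalition) >= 2 else 0.0)

def exact_shapley(player):
    """Exact Shapley value: average the player's marginal
    contribution over all n! orderings of the players."""
    total = 0.0
    for perm in itertools.permutations(PLAYERS):
        idx = perm.index(player)
        before = frozenset(perm[:idx])
        total += v(before | {player}) - v(before)
    return total / factorial(len(PLAYERS))

def monte_carlo_shapley(player, samples, rng):
    """Monte-Carlo estimate: average the marginal contribution
    over a handful of randomly sampled orderings."""
    total = 0.0
    for _ in range(samples):
        perm = PLAYERS[:]
        rng.shuffle(perm)
        idx = perm.index(player)
        before = frozenset(perm[:idx])
        total += v(before | {player}) - v(before)
    return total / samples

rng = random.Random(0)
exact = exact_shapley(0)                       # ground truth
estimate = monte_carlo_shapley(0, 20, rng)     # small-sample estimate
```

With three players the exact value is cheap, but the number of orderings grows as n!, so for a realistic supernet only a tiny fraction can be sampled and the estimate fluctuates from run to run; that variance is the instability the abstract attributes to Shapley-NAS.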