Fast Explanation of RBF-Kernel SVM Models Using Activation Patterns

20 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: RBF Kernel; Support Vector Machines; Activation Pattern; EEG; MEG
Abstract: Machine learning models have significantly enriched the toolbox of neuroimaging analysis. Among them, Support Vector Machines (SVM) have been one of the most popular models for supervised learning, but their use has primarily relied on linear SVMs because of their explainability. Kernel SVMs are capable classifiers but more opaque. Recent advances in eXplainable AI (XAI) have produced several feature-importance methods that address this explainability problem. However, these explanations can be distorted by noise variables, leading to irrelevant variables being regarded as important. The same problem arises when explaining linear models, where it can be addressed by the linear pattern. In this paper, we propose a fast method for globally explaining RBF-kernel SVMs by adopting the notion of a linear pattern in kernel space. Our method generates global explanations at low computational cost and is less affected by noise variables. We evaluate our method on simulated and real MEG/EEG datasets.
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2799
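The abstract builds on the notion of a linear (activation) pattern, presumably in the sense of Haufe et al. (2014), where the pattern a ∝ Cov(X)·w is reported instead of the raw decoder weights w, since the pattern is far less sensitive to noise-cancelling weights. The paper's kernel-space construction is not spelled out in the abstract, so the sketch below only illustrates the underlying linear-pattern idea on a linear SVM; the toy data and all variable names are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of the linear activation-pattern idea (a ∝ Cov(X) @ w)
# that the abstract extends to RBF-kernel space. Toy example only;
# not the paper's method.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy data: feature 0 carries the class signal plus correlated noise,
# feature 1 is pure noise that the classifier must cancel out.
n = 500
signal = rng.standard_normal(n)
noise = rng.standard_normal(n)
X = np.column_stack([signal + 0.5 * noise, noise])
y = (signal > 0).astype(int)

clf = LinearSVC(C=1.0, dual=False).fit(X, y)
w = clf.coef_.ravel()                 # extraction filter (decoder weights)

# Activation pattern: a ∝ Cov(X) @ w. The filter puts nonzero weight on the
# noise feature (to cancel it), whereas the pattern is near zero there.
a = np.cov(X, rowvar=False) @ w

print("filter  w:", w)
print("pattern a:", a)
```

In this toy example the filter w weights both features (it needs the noise channel for suppression), while the pattern a concentrates on the signal-carrying feature, which is the noise-robustness property the abstract claims to carry over to the kernel setting.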