Graph Kernel Convolutions for Interpretable Classification

ICLR 2024 Workshop DMLR Submission 44 Authors

Published: 04 Mar 2024, Last Modified: 02 May 2024 · DMLR @ ICLR 2024 · CC BY 4.0
Keywords: graph kernels, interpretability, Graph Neural Networks
Abstract: State-of-the-art Graph Neural Networks (GNNs) have demonstrated remarkable performance across diverse domains, hence the growing demand for more interpretable GNN techniques. While current research predominantly centers on post hoc perturbation techniques, recent studies propose the use of Graph Kernel Convolutions (GKConv) to increase GNNs' interpretability intrinsically. These models employ trainable graph filters to extract hidden features, yet their interpretability is limited because they rely heavily on multilayer perceptrons (MLPs) to make the final predictions. We argue that the MLPs are not necessary and that it is possible to build a model that relies solely on graph kernels and a simple linear layer. Additionally, we integrate a contrastive loss to encourage the learning of a more descriptive set of graph filters. As a consequence, the model's decision-making process, described entirely through the learned graph filters and the linear layer, is more interpretable. As a proof of concept, we propose a shallow GKConv Interpretable Classifier, which achieves state-of-the-art results while exhibiting better interpretability.
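The abstract describes a pipeline in which trainable graph filters produce kernel responses that feed directly into a linear classification head. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the graph kernel here is a simple RBF similarity between mean-pooled, neighborhood-propagated graph embeddings and learnable filter embeddings (real GKConv models use proper graph kernels over filter subgraphs), and all names (`propagate`, `gkconv_response`, the filter and weight shapes) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(adj, feats, hops=2):
    # Mean-aggregate node features over multi-hop neighborhoods,
    # then average-pool into a single graph-level embedding.
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    h = feats
    for _ in range(hops):
        h = (adj @ h + h) / deg
    return h.mean(axis=0)

def gkconv_response(adj, feats, filters, gamma=1.0):
    # Kernel response of the input graph against each trainable filter:
    # an RBF similarity standing in for the paper's graph kernels.
    g = propagate(adj, feats)
    d = ((filters - g) ** 2).sum(axis=1)
    return np.exp(-gamma * d)

# Toy input: a 4-node path graph with 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))

filters = rng.normal(size=(5, 3))  # 5 trainable filter embeddings
W = rng.normal(size=(5, 2))        # linear head: 5 responses -> 2 classes

resp = gkconv_response(adj, feats, filters)
logits = resp @ W   # class scores are a linear combination of filter
pred = int(logits.argmax())  # responses, so each weight in W directly
                             # attributes a class to a filter
```

Because the prediction is a linear combination of kernel responses, inspecting `W` shows which filter each class relies on, which is the interpretability argument the abstract makes.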
Primary Subject Area: Data-centric explainable AI
Paper Type: Extended abstracts: up to 2 pages
Participation Mode: In-person
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Submission Number: 44