Interpretable and Parameter Efficient Graph Neural Additive Models with Random Fourier Features

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY-NC-ND 4.0
Keywords: Interpretability, Graph Neural networks, Finite Impulse Response Filters, Random Fourier Features
TL;DR: Parameter efficient interpretable neural additive model for graph data
Abstract: Graph Neural Networks \texttt{(GNNs)} excel at jointly modeling node features and topology, yet their \emph{black-box} nature limits their adoption in real-world applications where interpretability is desired. Inspired by the success of interpretable Neural Additive Models \texttt{(NAMs)} for tabular data, the Graph Neural Additive Network \texttt{(GNAN)} extends the additive modeling approach to graph data to overcome these limitations of GNNs. While interpretable, \texttt{GNAN}'s representation learning overlooks the importance of local aggregation and, more importantly, suffers from high parameter complexity. To mitigate these challenges, we introduce the Graph Neural Additive Model with Random Fourier Features (\texttt{G-NAMRFF}), a lightweight, self-interpretable graph additive architecture. \texttt{G-NAMRFF} represents each node embedding as a sum of feature-wise contributions, where each contribution is modeled via a \emph{Gaussian process} \texttt{(GP)} with a graph- and feature-aware kernel. Specifically, we construct a Radial Basis Function (\texttt{RBF}) kernel whose graph structure is induced by the graph Laplacian and a learnable Finite Impulse Response (\texttt{FIR}) filter. We approximate the kernel using Random Fourier Features (\texttt{RFFs}), which recasts the \texttt{GP} prior as a Bayesian linear model whose weights are learned by a single-layer neural network with width equal to the number of \texttt{RFF} features. \texttt{G-NAMRFF} is lightweight, with $168\times$ fewer parameters than \texttt{GNAN}. Despite its compact size, \texttt{G-NAMRFF} matches or outperforms state-of-the-art \texttt{GNNs} and \texttt{GNAN} on node and graph classification tasks, delivering real-time interpretability without sacrificing accuracy.
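The abstract's core approximation step, replacing an RBF kernel with Random Fourier Features so the GP reduces to a finite-width linear model, can be illustrated with the standard Rahimi–Rechts construction. The sketch below is a generic RFF approximation of a plain RBF kernel, not the paper's graph-aware kernel (the Laplacian/FIR filtering is omitted); function names and parameters are illustrative.

```python
import numpy as np

def rff_features(X, n_features=2000, gamma=1.0, seed=0):
    """Map data X (n, d) to random Fourier features Z (n, D) such that
    Z @ Z.T approximates the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Sample frequencies from the kernel's spectral density:
    # for this RBF parameterization, omega ~ N(0, 2 * gamma * I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 4))
    Z = rff_features(X, n_features=5000, gamma=0.5, seed=1)
    K_approx = Z @ Z.T
    # Exact RBF kernel for comparison.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K_exact = np.exp(-0.5 * sq_dists)
    print(np.abs(K_approx - K_exact).max())  # small, shrinks as O(1/sqrt(D))
```

A GP prior with this kernel then becomes a Bayesian linear model on `Z`, i.e. `f(x) ≈ z(x) @ w` with a Gaussian prior on `w`, which is exactly the structure the abstract says is learned with a single-layer network of width `n_features`.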
Supplementary Material: zip
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 16787