Keywords: Deep Learning; Graph Neural Networks; Graph Pooling
TL;DR: We propose HistoGraph, a novel mechanism for learning from the intermediate activations of GNNs. We discuss its properties and demonstrate its effectiveness on a variety of graph benchmarks.
Abstract: Graph Neural Networks (GNNs) have demonstrated remarkable success in domains such as social networks and molecular chemistry. A crucial component of GNNs is the pooling procedure, in which the node features computed by the model are combined into an informative final descriptor for the downstream task. However, previous graph pooling schemes rely only on the last GNN layer's features as input to the pooling or classifier layers, potentially under-utilizing important activations produced by earlier layers during the forward pass, which we regard as \emph{historical graph activations}. This gap is particularly pronounced when a node's representation shifts significantly over the course of many graph neural layers, and it is worsened by graph-specific challenges such as over-smoothing in deep architectures. To bridge this gap, we introduce HistoGraph, a novel two-stage attention-based final aggregation layer that first applies a unified layer-wise attention over intermediate activations, followed by node-wise attention. By modeling the evolution of node representations across layers, HistoGraph leverages both the activation history of nodes and the graph structure to refine the features used for final prediction. Empirical results on multiple graph classification benchmarks demonstrate that HistoGraph consistently improves over traditional pooling techniques, with particularly strong robustness in deep GNNs.
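As a rough illustration of the two-stage aggregation described in the abstract, the sketch below implements layer-wise attention over each node's activation history followed by node-wise attention over the resulting summaries. It assumes PyTorch; the class name `HistoGraphPool`, the learned-query formulation, and all parameter names are hypothetical guesses at the mechanism, not the paper's actual implementation.

```python
# Minimal sketch of a HistoGraph-style two-stage attention pooling layer.
# Assumption: each attention stage uses a single learned query vector;
# the paper's layer may be parameterized differently.
import torch
import torch.nn as nn


class HistoGraphPool(nn.Module):
    """Pools a stack of per-layer GNN activations into one graph descriptor.

    Stage 1 (layer-wise): for every node, attend over its activation
             history (one vector per GNN layer).
    Stage 2 (node-wise): attend over the per-node summaries to form the
             final graph-level representation.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Learned query vectors for the two attention stages (hypothetical).
        self.layer_query = nn.Parameter(torch.randn(d_model))
        self.node_query = nn.Parameter(torch.randn(d_model))

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (num_layers, num_nodes, d_model), the intermediate
        # activations ("historical graph activations") of every GNN layer.
        # Stage 1: per-node attention over the layer axis.
        layer_scores = torch.einsum("lnd,d->ln", history, self.layer_query)
        layer_weights = layer_scores.softmax(dim=0)            # (L, N)
        node_summary = torch.einsum("ln,lnd->nd", layer_weights, history)

        # Stage 2: attention over nodes to produce the graph descriptor.
        node_scores = node_summary @ self.node_query           # (N,)
        node_weights = node_scores.softmax(dim=0)              # (N,)
        return node_weights @ node_summary                     # (d_model,)


# Usage: stack the activations saved at each GNN layer, then pool.
# history = torch.stack(saved_layer_activations)  # (L, N, D)
# graph_vec = HistoGraphPool(d_model=history.size(-1))(history)
```

This is a single-graph, single-head sketch; batching over graphs and multi-head attention would be straightforward extensions under the same two-stage structure.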
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21084