Abstract: Graph Neural Networks (GNNs) rely on both node-edge features and graph structure, but the relative contribution of these information sources is poorly understood. In many cases either features or structure contains more useful information, and in extreme cases one may inhibit learning, as in tasks where models overfit to structural patterns. Understanding the balance of these information sources is therefore essential for strategic model design.
We introduce Noise-Noise Analysis to measure each source’s contribution to model performance, along with the Noise-Noise Ratio Difference (NNRD), a metric that quantifies whether a model is feature-reliant or structure-reliant. Through experiments on synthetic and real-world graph-classification datasets, we show that GCN, GAT, and GIN layers can all perform graph-less learning (ignoring structure when it is unhelpful), but only GIN performs feature-less learning. All three architectures exhibit a bias toward features over structure. Noise-Noise Analysis provides practitioners with a fast tool for understanding their models’ information usage.
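The abstract does not give the NNRD formula, but the idea of comparing performance under feature noise versus structure noise can be sketched as follows. This is a minimal illustration under assumed definitions: the function names, the ratio-of-accuracies formulation, and the sign convention (positive NNRD indicating structure reliance) are all assumptions, not the paper's actual definitions.

```python
def noise_ratios(acc_clean, acc_feat_noise, acc_struct_noise):
    """Fraction of clean performance retained under each noise type.

    acc_feat_noise: accuracy when node features are randomized.
    acc_struct_noise: accuracy when graph structure is randomized.
    (Hypothetical setup; the paper's exact noising procedure may differ.)
    """
    feat_ratio = acc_feat_noise / acc_clean
    struct_ratio = acc_struct_noise / acc_clean
    return feat_ratio, struct_ratio


def nnrd(acc_clean, acc_feat_noise, acc_struct_noise):
    """Assumed NNRD sketch: how much more performance survives
    structure noise than feature noise. Positive -> the model leans
    on features (it tolerates losing structure better), negative ->
    it leans on structure. Sign convention is an assumption."""
    feat_ratio, struct_ratio = noise_ratios(
        acc_clean, acc_feat_noise, acc_struct_noise
    )
    return struct_ratio - feat_ratio


# Example: a model that keeps 90% of accuracy without structure
# but only 50% without features reads as feature-reliant.
print(nnrd(acc_clean=0.90, acc_feat_noise=0.45, acc_struct_noise=0.81))
```

Under these assumed definitions, a graph-less learner (per the abstract, all three architectures) would show a structure-noise ratio near 1, while only a feature-less learner (here, GIN) would show a feature-noise ratio near 1.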
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Giannis_Nikolentzos1
Submission Number: 7445