Abstract: Deep Graph Networks (DGNs), i.e., neural networks able to process graphs directly, feature an iterative message passing (MP) step that computes the node embeddings. However, the inductive and architectural biases of different DGNs, in relation to the type and number of MP iterations, have yet to be unveiled. Here, we investigate this important topic using eXplainable Artificial Intelligence (XAI) techniques for graphs. Specifically, we use the XAI metric of plausibility to detect explanatory patterns and relate this information to the biases that the underlying DGN exploits to correctly learn graph classification tasks. Applying this method to XAI benchmarks, we gather evidence of the rich diversity of DGN biases with respect to the type and number of MP iterations. In addition, we show that, even when the MP conditions are fixed, the learned explanatory pattern may change depending on the norm of the learned weights, indicating that in particular cases the training procedure influences the generalization dynamics.
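For context, the MP step mentioned above can be written in the standard message-passing form; the notation below is an illustrative sketch and is not taken from the paper itself:

\[
\mathbf{h}_v^{(t+1)} = \phi\!\left(\mathbf{h}_v^{(t)},\; \bigoplus_{u \in \mathcal{N}(v)} \psi\!\left(\mathbf{h}_v^{(t)}, \mathbf{h}_u^{(t)}\right)\right)
\]

Here \(\psi\) computes messages along edges, \(\bigoplus\) is a permutation-invariant aggregator (e.g., sum or mean) over the neighborhood \(\mathcal{N}(v)\), and \(\phi\) updates the node embedding. Under this reading, the abstract's "type and number of MP iterations" corresponds to the choice of \(\psi\), \(\bigoplus\), and \(\phi\), and to how many times \(t\) the update is applied.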