Keywords: hodge decomposition, simplicial complexes, spectral simplicial theory, simplicial neural network, stability
Abstract: Neural networks on simplicial complexes (SCs) can learn from data residing on simplices such as nodes, edges, triangles, etc.
However, existing works often overlook Hodge theory, which decomposes simplicial data into three orthogonal characteristic subspaces, e.g., the identifiable gradient, curl and harmonic components of edge flows.
In this paper, we aim to incorporate this data inductive bias into learning on SCs.
In particular, we present a general convolutional architecture
which respects the three key principles of uncoupling the lower and upper simplicial adjacencies, accounting for the inter-simplicial couplings, and performing higher-order convolutions.
To understand these principles, we first use Dirichlet energy minimizations on SCs to interpret their effects on mitigating simplicial oversmoothing.
Then, through the lens of spectral simplicial theory,
we show that the three principles promote Hodge-aware learning in this architecture, in the sense that the three Hodge subspaces are invariant under its learnable functions and the learning in the two nontrivial subspaces is independent and expressive.
To further investigate the learning ability of this architecture, we also show that it is stable against small perturbations of the simplicial connections.
Finally, we experimentally validate the three principles by comparing against methods that violate or only partially respect them.
Overall, this paper bridges learning on SCs with the Hodge decomposition, highlighting its importance for rational and effective learning from simplicial data.
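For reference, the Hodge decomposition of edge flows mentioned in the abstract can be sketched as follows (a standard statement; the incidence-matrix notation $\mathbf{B}_1$, $\mathbf{B}_2$ is ours and may differ from the paper's):
\[
  % Space of edge signals on a simplicial complex with N_1 edges:
  % gradient subspace im(B_1^T), curl subspace im(B_2), harmonic subspace ker(L_1),
  % where B_1 is the node-to-edge and B_2 the edge-to-triangle incidence matrix.
  \mathbb{R}^{N_1} \;=\; \mathrm{im}(\mathbf{B}_1^\top) \,\oplus\, \mathrm{im}(\mathbf{B}_2) \,\oplus\, \ker(\mathbf{L}_1),
  \qquad
  \mathbf{L}_1 = \mathbf{B}_1\mathbf{B}_1^\top + \mathbf{B}_2\mathbf{B}_2^\top ,
\]
where $\mathbf{L}_1$ is the Hodge 1-Laplacian, so any edge flow splits orthogonally into a gradient, a curl and a harmonic component.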
Supplementary Material: zip
Submission Number: 7111