Implicit Hypergraph Neural Networks: A Stable Framework for Higher-Order Relational Learning with Provable Guarantees
Keywords: graph neural networks, hypergraph neural networks, implicit models
Abstract: Many real-world interactions are group-based rather than pairwise, e.g., papers with multiple co-authors or users jointly engaging with items. Hypergraph neural networks (HGNNs) capture such higher-order relations, but fixed-depth message passing can miss long-range dependencies, and simply stacking more layers to reach them destabilizes training.
We introduce the Implicit Hypergraph Neural Network (IHGNN), which brings the implicit equilibrium formulation to hypergraphs: instead of stacking layers, IHGNN computes representations as the solution to a nonlinear fixed-point equation, enabling stable, efficient global propagation across hyperedges without deep architectures. We develop a well-posed training scheme with provable convergence, characterize conditions for oversmoothing and the model's expressivity, and derive a transductive generalization bound on hypergraphs. Training uses an implicit-gradient method coupled with a projection-based stabilizer.
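To make the equilibrium formulation concrete, here is a minimal sketch in standard implicit-GNN notation; the specific symbols $S$, $W$, $b_\Theta$, and the normalization are illustrative assumptions, not necessarily the paper's exact parameterization. Representations $Z^\star$ solve

$$Z^\star = \phi\!\left(W Z^\star S + b_\Theta(X)\right), \qquad S = D_v^{-1/2}\, H\, W_e\, D_e^{-1}\, H^{\top} D_v^{-1/2},$$

where $H \in \{0,1\}^{n \times m}$ is the incidence matrix of a hypergraph with $n$ vertices and $m$ hyperedges, $D_v$ and $D_e$ are the vertex- and hyperedge-degree matrices, $W_e$ is a diagonal matrix of hyperedge weights, $\phi$ is a componentwise nonexpansive activation, and $b_\Theta(X)$ injects the input features. If $W$ is constrained (e.g., projected onto a suitable norm ball) so that the right-hand side is a contraction in $Z$, the fixed point exists, is unique, and is reachable by simple iteration; gradients at $Z^\star$ can then be obtained via the implicit function theorem instead of backpropagating through unrolled layers, which is one way to realize the implicit-gradient training with a projection-based stabilizer described above.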
On citation benchmarks, IHGNN consistently outperforms strong graph and hypergraph baselines in both accuracy and robustness, and its performance is notably insensitive to random initialization and hyperparameter choices, underscoring its generalization and practical value for higher-order relational learning.
Submission Number: 63