Keywords: Hypergraph Neural Networks, Uncertainty Quantification, Contrastive Learning
TL;DR: Uncertainty quantification on hypergraphs by jointly minimizing a contrastive loss over structural augmentations and a conformal prediction loss
Abstract: Hypergraph representation learning has gained immense popularity in recent years due to its applications in real-world domains such as social network analysis, recommendation systems, biological network modeling, and knowledge graphs. However, hypergraph neural networks (HGNNs) lack rigorous uncertainty estimates, which limits their deployment in critical applications where the reliability of predictions is crucial. To bridge this gap, we propose Contrastive Conformal HGNN (CCF-HGNN), which jointly accounts for aleatoric and epistemic uncertainties in hypergraph-based models to provide guaranteed and robust uncertainty estimates. CCF-HGNN addresses epistemic uncertainty in HGNN predictions by producing a prediction set/interval that leverages the topological structure and provably contains the true label with a pre-defined coverage probability, while capturing aleatoric uncertainty through contrastive augmentations of the hypergraph structure. To improve the efficiency of the prediction sets, CCF-HGNN performs an additional auxiliary task of hyperedge degree prediction with an end-to-end differentiable sampling-based approach. Extensive experiments on real-world hypergraph datasets demonstrate that CCF-HGNN yields more efficient prediction sets than existing baselines while maintaining valid coverage.
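For context, the coverage guarantee described in the abstract builds on split conformal prediction, which calibrates a score threshold on held-out data so that prediction sets contain the true label with probability at least 1 − α. Below is a minimal, framework-agnostic sketch of that base procedure; the function name and the simple 1 − p(y) nonconformity score are illustrative assumptions, not the paper's exact topology-aware implementation.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction (illustrative sketch, not CCF-HGNN itself).

    cal_probs:  (n, K) softmax outputs on a held-out calibration split
    cal_labels: (n,)   true labels for the calibration split
    test_probs: (m, K) softmax outputs on test points
    Returns a boolean (m, K) mask; row i marks the classes in the
    prediction set, which contains the true label with probability
    >= 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Include every class whose score is below the calibrated threshold.
    return test_probs >= 1.0 - qhat
```

The abstract's efficiency claim refers to shrinking the average size of these sets (via the contrastive and auxiliary objectives) without violating the coverage guarantee.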
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21680