TL;DR: Learning to predict hypergraph structure in an unsupervised way, only from the downstream task signal.
Abstract: The importance of higher-order relations is widely recognized in numerous real-world systems. However, annotating them is a tedious and sometimes even impossible task. Consequently, current approaches for data modelling either ignore the higher-order interactions altogether or simplify them into pairwise connections.
To facilitate higher-order processing, even when a hypergraph structure is not available, we introduce SPHINX, a model that learns to infer a latent hypergraph structure in an unsupervised way, solely from the final task-dependent signal. To ensure broad applicability, we design the model to be end-to-end differentiable, capable of generating a discrete hypergraph structure compatible with any modern hypergraph network, and easily optimizable without requiring additional regularization losses.
Through extensive ablation studies and experiments conducted on four challenging datasets, we demonstrate that our model is capable of inferring suitable latent hypergraphs in both transductive and inductive tasks. Moreover, the inferred latent hypergraphs are interpretable and contribute to enhancing the final performance, outperforming existing methods for hypergraph prediction.
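The abstract describes an end-to-end differentiable pipeline that produces a discrete hypergraph structure from the task signal alone. A common way to make discrete node-to-hyperedge assignments differentiable is a Gumbel-Softmax relaxation; the sketch below illustrates that general idea with NumPy. All names (the projection `W`, the number of hyperedges `m`, the nodes-per-hyperedge budget `k`) are illustrative assumptions, not SPHINX's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    # Soft relaxation of sampling one node: add Gumbel noise, then softmax.
    # A straight-through variant would round to one-hot at inference time.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

n, d, m, k = 8, 4, 3, 2        # nodes, feature dim, hyperedges, nodes per hyperedge
X = rng.normal(size=(n, d))     # node features
W = rng.normal(size=(d, m))     # hypothetical learnable projection

logits = X @ W                  # (n, m): node-to-hyperedge affinity scores
# Build a soft incidence matrix H: for each hyperedge, draw k soft
# node-selection vectors and accumulate them into the column.
H = np.zeros((n, m))
for e in range(m):
    for _ in range(k):
        H[:, e] += gumbel_softmax(logits[:, e])

# One round of two-step hypergraph message passing: nodes -> hyperedges -> nodes.
E = H.T @ X / k                                   # (m, d) hyperedge embeddings
X_out = H @ E / H.sum(axis=1, keepdims=True).clip(1e-6)
```

Because every step is differentiable, gradients from any downstream task loss on `X_out` flow back through `H` into the structure-predicting parameters `W`, which is what allows the latent hypergraph to be learned without structure supervision.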
Lay Summary: Modeling group interactions among multiple entities simultaneously presents a challenging yet broadly applicable problem, with relevance across diverse domains such as chemistry, physics, medicine, and social networks. However, leveraging specialized architectures like hypergraph neural networks is infeasible without accurate annotations of these complex relationships. To address this, we introduce SPHINX—a novel model that jointly infers the underlying hypergraph structure and performs the downstream task using only supervision from the task itself. This approach not only enables the modeling of previously unobserved higher-order interactions but also provides interpretable visualizations of the discovered structures, enhancing the transparency and explainability of the model.
Link To Code: MzM1N
Primary Area: Deep Learning->Graph Neural Networks
Keywords: hypergraph prediction, hypergraph representation learning
Submission Number: 9533