Keywords: Hypergraph neural networks, Quantization, Attention
TL;DR: Efficient hypergraph learning via dual attention and adaptive quantization
Abstract: Hypergraph neural networks (HGNNs) capture higher-order relationships beyond pairwise graphs, yet most existing models suffer from a \emph{uniform capacity assumption}, allocating equal resources to all node--hyperedge interactions regardless of their informativeness. This leads to inefficiencies and degraded performance, especially under compression. Moreover, current attention mechanisms and quantization methods often fail to preserve the structural and informational properties essential for hypergraph learning. We introduce \textsc{QAdapt}, a principled framework that unifies \emph{information-theoretic attention allocation}, \emph{spectral-preserving fusion}, and \emph{co-adaptive quantization}. QAdapt adaptively assigns precision based on information density, leverages spectral fusion to capture multi-scale hypergraph structure, and learns differentiable bit-allocation policies that co-optimize attention and quantization. Extensive experiments on five benchmarks show that QAdapt delivers up to $5.4\times$ compression and $4.7\times$ speedup while achieving consistent accuracy gains of $+6.7\%$ to $+9.0\%$ over state-of-the-art quantization baselines. These results demonstrate that integrating information-theoretic attention with spectral-preserving quantization enables efficient yet accurate hypergraph learning.
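The abstract's central mechanism is co-adaptive quantization: bit widths are allocated per interaction from an information score and learned jointly with attention. Below is a minimal, hedged sketch of one way such differentiable bit allocation could look; the paper's actual formulation is not shown here, and every name (`InfoAwareQuantizer`, the `b_min`/`b_max` bounds, the linear gate) is a hypothetical stand-in, with the straight-through estimator as a standard choice for gradients through rounding.

```python
# Hypothetical sketch of information-driven, differentiable bit allocation.
# Not the paper's method: names and design choices are illustrative only.
import torch
import torch.nn as nn


class InfoAwareQuantizer(nn.Module):
    """Maps a per-interaction information score to a soft bit width,
    then quantizes values at that precision with straight-through
    gradients so the allocation policy can be trained end-to-end."""

    def __init__(self, b_min: float = 2.0, b_max: float = 8.0):
        super().__init__()
        self.b_min, self.b_max = b_min, b_max
        # Learnable gate turning information scores into bit widths.
        self.gate = nn.Linear(1, 1)

    def forward(self, values: torch.Tensor, info_score: torch.Tensor):
        # Soft bit allocation in [b_min, b_max], driven by informativeness.
        bits = self.b_min + (self.b_max - self.b_min) * torch.sigmoid(
            self.gate(info_score.unsqueeze(-1))
        ).squeeze(-1)
        levels = 2.0 ** bits - 1.0  # quantization levels per interaction
        # Per-row scale so values are normalized to [-1, 1] before rounding.
        scale = values.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8)
        normed = values / scale
        q = torch.round(normed * levels.unsqueeze(-1)) / levels.unsqueeze(-1)
        # Straight-through estimator: forward pass uses the quantized value,
        # backward pass flows gradients through the continuous path.
        q = normed + (q - normed).detach()
        return q * scale, bits


# Usage: quantize 16 node--hyperedge messages of width 32, where the
# information score might come from attention weights.
quantizer = InfoAwareQuantizer()
msgs = torch.randn(16, 32)
scores = torch.rand(16)
q_msgs, bits = quantizer(msgs, scores)
```

Under this reading, higher-information interactions receive more bits, while the rounding step keeps the forward pass genuinely low-precision; how QAdapt actually couples this with its spectral fusion is left to the paper.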
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 6438