Abstract: With the growing adoption of Hypergraph Neural Networks (HNNs) to model higher-order relationships in complex data, concerns about their security and robustness have become increasingly important. However, current security research often overlooks the unique structural characteristics of hypergraphs when designing adversarial attack and defense strategies. To address this gap, we demonstrate that hypergraphs are particularly vulnerable to node injection attacks, a threat model that closely mirrors real-world conditions, since attackers can introduce new nodes without modifying existing ones. Through empirical analysis, we develop a largely unnoticeable attack strategy that monitors changes in homophily and exploits this self-regulating property to enhance stealth. Building on these insights, we introduce HyperNear, i.e., $\underline{N}$ode inj$\underline{E}$ction $\underline{A}$ttacks on hype$\underline{R}$graph neural networks, the first node injection attack framework tailored specifically to HNNs. HyperNear integrates homophily-preserving strategies to jointly optimize stealth and attack effectiveness. Extensive experiments show that HyperNear achieves strong attack performance and generalization, constituting the first comprehensive study of injection attacks on hypergraphs. Our code is available at https://github.com/ca1man-2022/HyperNear.
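To make the homophily-monitoring idea concrete, below is a minimal sketch, not the paper's actual implementation, of how one might track hyperedge-level homophily before and after a node injection. The function `hyperedge_homophily` and its pairwise same-label definition are illustrative assumptions; HyperNear's precise metric may differ.

```python
import itertools
import numpy as np

def hyperedge_homophily(hyperedges, labels):
    """Mean fraction of same-label node pairs within each hyperedge.

    hyperedges: list of lists of node indices
    labels: 1-D array where labels[i] is the class of node i
    """
    scores = []
    for e in hyperedges:
        if len(e) < 2:
            continue  # singleton hyperedges carry no pairwise signal
        pairs = list(itertools.combinations(e, 2))
        same = sum(labels[u] == labels[v] for u, v in pairs)
        scores.append(same / len(pairs))
    return float(np.mean(scores)) if scores else 0.0

# Toy example: three hyperedges over five nodes, two classes.
labels = np.array([0, 0, 1, 1, 0])
hyperedges = [[0, 1, 4], [2, 3], [1, 2, 3]]
before = hyperedge_homophily(hyperedges, labels)

# Inject a fake node (index 5, class 0) into the first hyperedge and
# re-measure; a stealthy attack keeps the drop in homophily small.
labels = np.append(labels, 0)
hyperedges[0] = hyperedges[0] + [5]
after = hyperedge_homophily(hyperedges, labels)
print(f"homophily before: {before:.3f}, after injection: {after:.3f}")
```

An attacker could use such a score as a constraint while optimizing injected node features and hyperedge memberships, rejecting injections that shift homophily beyond a tolerance.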
Lay Summary: Modern AI tools are getting better at understanding complex systems, such as how people interact on social media or how diseases spread. A new kind of AI model, called a hypergraph neural network, is especially good at this. But we found that it may also be easier to fool than expected.
Our research shows that by adding just a few fake data points, these systems can be misled into making wrong predictions. These fake points can be carefully crafted to blend in, making the attack hard to notice. We built a tool called HyperNear to study this issue. It creates smart, hidden attacks that work well even when the attacker doesn’t know much about the system.
This is the first study to show how vulnerable these models can be in such situations. We hope our work will help researchers build more secure AI systems that are ready for the real world.
Link To Code: https://github.com/ca1man-2022/HyperNear
Primary Area: Deep Learning->Graph Neural Networks
Keywords: hypergraph neural network, homophily, attack
Submission Number: 1046