Reinforcement learning-based secure training for adversarial defense in graph neural networks

Published: 01 Jan 2025 · Last Modified: 08 Mar 2025 · Neurocomputing 2025 · License: CC BY-SA 4.0
Abstract: Highlights
• Introduces an RL-based secure training algorithm that models GNN training as an MDP and uses deep Q-learning to defend against adversarial attacks on graph neural networks.
• Enables formal verification of GNN training via model transformation, so security properties can be checked with tools such as Prism.
• Achieves higher accuracy against node-level attacks than Robust GCN and Median GCN across multiple datasets.
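The first highlight frames secure training as an MDP solved with deep Q-learning. The paper's actual states, actions, and rewards are not given here, so the following is a purely illustrative tabular Q-learning sketch: the agent picks a hypothetical "defensive training action" per epoch and learns from an assumed robustness-style reward (all names and reward values are made up for illustration, not taken from the paper):

```python
# Illustrative only: toy MDP and rewards are assumptions, not the authors' design.
import random

random.seed(0)

# Hypothetical defensive actions the agent can take at each training epoch.
ACTIONS = ["train_clean", "adversarial_augment", "drop_suspicious_edges"]
NUM_STATES = 10  # state = training-epoch index (toy abstraction)


def step(state, action):
    """Toy environment: reward stands in for validation robustness (assumed)."""
    base = {"train_clean": 0.1,
            "adversarial_augment": 0.9,
            "drop_suspicious_edges": 0.5}[action]
    reward = base + random.uniform(-0.05, 0.05)  # small noise in the signal
    next_state = min(state + 1, NUM_STATES - 1)  # last state absorbs
    return next_state, reward


def q_learning(episodes=500, alpha=0.2, gamma=0.9, eps=0.2):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    Q = {(s, a): 0.0 for s in range(NUM_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(NUM_STATES):
            if random.random() < eps:
                a = random.choice(ACTIONS)          # explore
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
            s2, r = step(s, a)
            best_next = max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q


Q = q_learning()
# Greedy policy: the defensive action the agent prefers at each epoch.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(NUM_STATES)}
print(policy[0])
```

The paper reportedly uses deep Q-learning (a neural network in place of the Q-table) and a reward tied to robustness against node-level attacks; this tabular version only shows the control loop's shape.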