Despite the success of graph neural networks (GNNs), their vulnerability to adversarial attacks poses tremendous challenges for practical applications. Existing defense methods suffer severe performance declines under unknown attacks, because they rely either on a limited set of observed adversarial examples (adversarial training) or on pre-defined heuristics (graph purification or robust aggregation). To address these limitations, we analyze the causalities in graph adversarial attacks and conclude that causal features are desirable for achieving graph adversarial robustness, because they determine labels and remain invariant across attacks. To learn such causal features, we propose an Invariant causal DEfense method against adversarial Attacks (IDEA). We derive node-based and structure-based invariance objectives from an information-theoretic perspective, and prove that IDEA is a causally invariant defense across various attacks. Extensive experiments demonstrate that IDEA significantly outperforms all baselines under both poisoning and evasion attacks on five benchmark datasets, highlighting its strong and invariant predictability. The implementation of IDEA is available at https://anonymous.4open.science/r/IDEA_repo-666B.
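The abstract describes the invariance objectives only at a high level; the precise node-based and structure-based formulations are in the paper and repository. Purely as a rough, non-authoritative sketch of the general idea (training a GNN whose predictor is simultaneously optimal across attack-induced "environments", such as the clean graph and several perturbed variants), here is an IRM-style penalty in PyTorch. The names `irm_penalty`, `invariant_loss`, the `envs` structure, the `model(x, edge_index)` call convention, and the weight `lam` are all hypothetical illustrations, not IDEA's actual API or objectives.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1-style penalty (Arjovsky et al.): squared norm of the gradient
    # of the risk w.r.t. a dummy classifier scale. It measures how far the
    # shared predictor is from being optimal in this single environment.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad, = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

def invariant_loss(model, envs, lam=1.0):
    # envs: iterable of (features, edge_index, labels) tuples, e.g. the
    # clean graph plus several attacked variants of it (hypothetical setup).
    risks, penalties = [], []
    for x, edge_index, y in envs:
        logits = model(x, edge_index)
        risks.append(F.cross_entropy(logits, y))
        penalties.append(irm_penalty(logits, y))
    # Average empirical risk plus an invariance penalty across environments.
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```

This sketch only conveys the invariance-across-environments principle that motivates IDEA; the paper's own objectives additionally encode node-based and structure-based invariance conditions derived from its information-theoretic analysis.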