PerturbFormer: Adversarial Graph Transformers for Scalable and Resilient Representation Learning

13 Sept 2025 (modified: 27 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Adversarial Graph Learning, Transformer Architectures, Multi-scale Embeddings, Generative Pretraining, Adaptive Signal Calibration, Vertex Classification, Computational Efficiency
TL;DR: PerturbFormer trains robust graph transformers by letting a GAN-style generator perturb edges during training while a confidence-aware residual module self-corrects, outperforming state-of-the-art baselines on large homophilous and heterophilous benchmarks with fewer parameters.
Abstract: We introduce PerturbFormer, a unified framework for node-level representation learning that addresses three persistent limitations of modern graph models: transformer attention degradation under low homophily, vulnerability to structural perturbations, and the high cost of large-scale inference. PerturbFormer combines three components: multi-scale structural synthesis with contrastive pretraining to produce geometry-aware embeddings; a heterophily-adaptive transformer backbone guided by learned structural cues; and an end-to-end adversarial propagation module in which a generator proposes plausible edge modifications while a discriminator enforces semantic consistency. A node-confidence-weighted residual correction further adjusts propagation strength at node-level granularity and enables practical contractivity controls for stable iterative refinement. The combined design improves robustness and predictive quality on both homophilous and heterophilous benchmarks while keeping parameter and runtime costs competitive. Practical guidelines and implementation details are included to support effective application of the framework.
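The submission does not include code here, so the following minimal PyTorch sketch is only one possible reading of two mechanisms named in the abstract: a generator/discriminator pair over edges and a node-confidence-weighted residual correction with a bounded step size as a contractivity control. All class and parameter names (e.g. EdgePerturbationGenerator, ConfidenceResidual, max_step) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn


class EdgePerturbationGenerator(nn.Module):
    """Proposes a perturbed weight for an edge from its endpoint embeddings."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h_src: torch.Tensor, h_dst: torch.Tensor) -> torch.Tensor:
        # Output in (0, 1): a soft "keep / modify" weight for each candidate edge.
        return torch.sigmoid(self.mlp(torch.cat([h_src, h_dst], dim=-1)))


class EdgeDiscriminator(nn.Module):
    """Scores whether a (possibly perturbed) edge is semantically consistent."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h_src, h_dst, edge_weight):
        # Returns a real/fake logit per edge.
        return self.mlp(torch.cat([h_src, h_dst, edge_weight], dim=-1))


class ConfidenceResidual(nn.Module):
    """Blends propagated features into node states with a per-node learned
    confidence, capped below 1 so repeated updates remain contractive."""

    def __init__(self, dim: int, max_step: float = 0.9):
        super().__init__()
        self.confidence = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.max_step = max_step  # contractivity control: step size stays in (0, max_step)

    def forward(self, h: torch.Tensor, propagated: torch.Tensor) -> torch.Tensor:
        alpha = self.max_step * self.confidence(h)      # [num_nodes, 1]
        return (1.0 - alpha) * h + alpha * propagated   # convex, bounded-step update


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, num_edges, num_nodes = 16, 32, 10
    h = torch.randn(num_nodes, dim)
    src = torch.randint(0, num_nodes, (num_edges,))
    dst = torch.randint(0, num_nodes, (num_edges,))

    gen, disc = EdgePerturbationGenerator(dim), EdgeDiscriminator(dim)
    w = gen(h[src], h[dst])            # proposed edge weights, shape [num_edges, 1]
    score = disc(h[src], h[dst], w)    # discriminator logits, shape [num_edges, 1]

    residual = ConfidenceResidual(dim)
    h_next = residual(h, torch.randn_like(h))  # e.g. output of one propagation/attention step
    print(w.shape, score.shape, h_next.shape)
```

In this reading, the generator and discriminator are trained adversarially over edge weights while the residual module keeps each refinement step a convex combination of the old and propagated states, which is one simple way to realize the "contractivity controls" the abstract refers to.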
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 4818