LOAT: Latent-Order Adversarial Training for Efficient and Transferable Robustness

ICLR 2026 Conference Submission 16007 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: adversarial training
TL;DR: The paper introduces a novel teacher-student model for adversarial training that captures latent ordering adaptively and efficiently.
Abstract: Adversarial training remains computationally prohibitive due to the uniform application of expensive PGD (projected gradient descent) attacks across all training samples. Although prior works identify ``hard'' samples deserving of more computational effort, such approaches require supervised definitions of difficulty and do not capture the complex dynamics of how neural networks naturally learn robust representations. We present Latent-Order Adversarial Training (LOAT), a novel unsupervised method that discovers the emergent structure in adversarial training. It clusters adversarial dynamics using multiple complementary feature views, grouping structurally similar samples and identifying an adaptive path of compatible learned dynamics that trains sub-models more efficiently via a generalized set of probabilistic choices. By combining the inherent descriptors in an evolutionary learning model, LOAT builds a global model that transfers a transition matrix $T$ capturing empirical patterns of how training naturally flows between clusters. Experiments on CIFAR-10 demonstrate that this discovered structure can efficiently and adaptively allocate PGD steps per cluster, following the learned transitions, reducing computational cost by 40--50\% while maintaining comparable or better robustness. The transferable global structure of the algorithm captures generalizable patterns independent of potentially biased human notions of difficulty. LOAT shows that respecting intrinsic training dynamics yields significant efficiency gains without sacrificing robustness.
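The mechanism the abstract describes (cluster adversarial dynamics, estimate a transition matrix $T$, allocate PGD steps per cluster) can be illustrated with a minimal sketch. This is not the authors' implementation: it substitutes plain KMeans for LOAT's multi-view clustering, uses random surrogate features and a surrogate visit order in place of real training signals, and assumes a simple self-transition ("stickiness") heuristic for the step budget; names such as `steps_per_cluster` are hypothetical.

```python
# Minimal sketch of cluster-conditioned PGD step allocation (not the paper's
# reference implementation). Assumptions: random vectors stand in for the
# paper's "multiple complementary feature views"; plain KMeans stands in for
# LOAT's unsupervised clustering; T is estimated from a surrogate cluster
# sequence rather than an actual training trajectory.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for latent descriptors of each sample's adversarial dynamics.
features = rng.normal(size=(1024, 16))          # 1024 samples, 16-dim view
k = 4                                           # number of clusters (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# Empirical transition matrix T: T[i, j] ~ P(next cluster = j | current = i),
# estimated from the order in which training visits clusters.
visit_order = labels[rng.permutation(len(labels))]  # surrogate training stream
T = np.zeros((k, k))
for a, b in zip(visit_order[:-1], visit_order[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# Adaptive PGD budget (assumed heuristic): clusters whose dynamics are
# "stickier" (high self-transition probability, i.e. persistently hard) get
# more attack steps; others get fewer.
base_steps, max_steps = 3, 10
stickiness = np.diag(T)                          # self-transition probability
steps_per_cluster = np.clip(
    np.round(base_steps + stickiness * (max_steps - base_steps)).astype(int),
    base_steps, max_steps)
print("steps per cluster:", steps_per_cluster)
# At train time, a minibatch sample assigned to cluster c would be attacked
# with steps_per_cluster[c] PGD iterations instead of a uniform budget.
```

Under these assumptions, the 40--50\% cost reduction the abstract reports would come from most clusters receiving far fewer than the uniform worst-case number of PGD steps.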
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 16007