Domain-Aware Gradient Reuse for Anomaly Detection

15 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Transfer Learning, Anomaly Detection, Gradient Reuse
Abstract: Anomaly detection relies on recognizing patterns that diverge from normal behavior, yet practical deployment is hampered by the inherent scarcity and heterogeneity of anomalous instances. These challenges prevent the training set from faithfully characterizing the underlying anomaly distribution, fundamentally constraining the development of effective discriminative models for anomaly detection. Motivated by the observed consistency of gradient distributions across related domains during training, we introduce Domain-Aware Gradient Reuse (DAGR), a transfer-learning framework that leverages this property. DAGR first learns an adaptive transformation by aligning source and target normal gradients, thereby neutralizing domain-specific effects. The same map then pushes forward the source anomalous gradients to compute estimated target anomalous gradients, which are combined with the true target normal gradients to guide the target-domain detector without labeled anomalies. We further establish a rigorous convergence proof that reinforces the framework's theoretical foundation. Comprehensive experiments on image and audio datasets demonstrate that the proposed method achieves state-of-the-art performance.
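The two-step idea in the abstract (align normal gradients, then push source anomalous gradients through the learned map) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are invented, and the alignment map is assumed here to be a simple linear least-squares fit, whereas the paper learns an adaptive transformation.

```python
import numpy as np

def fit_alignment_map(src_normal_grads, tgt_normal_grads):
    """Fit a linear map W aligning source normal gradients with target
    normal gradients, tgt ~= src @ W (hypothetical stand-in for DAGR's
    adaptive transformation)."""
    W, *_ = np.linalg.lstsq(src_normal_grads, tgt_normal_grads, rcond=None)
    return W

def estimate_target_anomalous_grads(src_anom_grads, W):
    """Push source anomalous gradients through the learned map to obtain
    estimated target anomalous gradients."""
    return src_anom_grads @ W

# Toy data: model the domain shift as a fixed linear distortion of gradients.
rng = np.random.default_rng(0)
d = 8
true_shift = rng.normal(size=(d, d))
src_normal = rng.normal(size=(100, d))
tgt_normal = src_normal @ true_shift          # target normals = shifted source normals
src_anom = rng.normal(size=(20, d)) + 3.0     # offset cluster plays the anomaly role

W = fit_alignment_map(src_normal, tgt_normal)
est_tgt_anom = estimate_target_anomalous_grads(src_anom, W)
# In DAGR, these estimates would be combined with the true target normal
# gradients to guide the target-domain detector without labeled anomalies.
```

On this synthetic setup the least-squares map recovers the distortion exactly, so the pushed-forward anomalous gradients match the (unobserved) target anomalous gradients; in practice the alignment is learned only from normal data and the estimates are approximate.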
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 6163