Attacking Lifelong Learning Models with Gradient Reversion

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: lifelong learning, adversarial learning
TL;DR: Extensive evaluation of the robustness of an episodic lifelong learning algorithm under traditional adversarial attacks and the proposed gradient reversion attack.
Abstract: Lifelong learning aims at avoiding the catastrophic forgetting problem of traditional supervised learning models. Episodic memory based lifelong learning methods such as A-GEM (Chaudhry et al., 2018b) achieve state-of-the-art results across the benchmarks. In A-GEM, a small episodic memory stores a random subset of the examples from previous tasks. While the model is trained on a new task, a reference gradient is computed on the episodic memory to guide the direction of the current update. Although A-GEM has strong continual learning ability, it is not clear whether it can retain this performance in the presence of adversarial attacks. In this paper, we examine the robustness of A-GEM against adversarial attacks on the examples in the episodic memory. We evaluate the effectiveness of traditional attack methods such as FGSM and PGD. The results show that A-GEM still possesses strong continual learning ability in the presence of adversarial examples in the memory, and simple defense techniques such as label smoothing can further alleviate the adversarial effects. We presume that traditional attack methods are specifically designed for standard supervised learning models rather than lifelong learning models. We therefore propose a principled way of attacking A-GEM, called gradient reversion (GREV), which is shown to be more effective. Our results indicate that future lifelong learning research should bear adversarial attacks in mind and develop more robust lifelong learning algorithms.
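For context, the two mechanisms the abstract relies on can be summarized in a minimal NumPy sketch: the standard A-GEM projection of the current-task gradient against the reference gradient from the episodic memory (Chaudhry et al., 2018b), and a standard FGSM perturbation applied to the stored memory examples. This is an illustrative reconstruction under those textbook definitions, not the authors' released code (see the link below), and the function names are hypothetical.

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM update rule: if the current-task gradient g conflicts with the
    reference gradient g_ref computed on the episodic memory (negative inner
    product), remove the conflicting component before applying the update."""
    dot = float(np.dot(g, g_ref))
    if dot < 0:
        g = g - (dot / float(np.dot(g_ref, g_ref))) * g_ref
    return g

def fgsm_perturb_memory(x_mem, grad_x, eps=0.03):
    """Standard FGSM attack on episodic-memory examples:
    x_adv = clip(x + eps * sign(dL/dx), 0, 1)."""
    return np.clip(x_mem + eps * np.sign(grad_x), 0.0, 1.0)
```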
Code: https://drive.google.com/file/d/1zdSJ0aZR3KxoH_TDY1vMd5LFiDBS6v43/view?usp=sharing
4 Replies