Learning Frequency-aware Network for Continual Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Continual Learning, Incremental Learning, Vision Transformer
Abstract: Continual learning is the challenging problem of learning new tasks while forgetting as little as possible of the knowledge acquired on old tasks. Most current algorithms process every pixel of an image identically. Yet just as people remember image details and the image as a whole differently, a neural network also forgets different parts of an image asynchronously. In this paper, we study this asynchronous forgetting of image content at different frequencies and address it from two directions: network structure design and feature preservation. For the network structure, we design a dual-stream network that separates high and low frequencies, exploiting the respective strengths of CNNs and Transformers to process the high-frequency and low-frequency information of images. For feature preservation, we design a dynamic distillation loss that adjusts the weights placed on preserving high-frequency and low-frequency information according to the network's training stage. We verify the effectiveness of our scheme through a series of experiments.
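The abstract describes two components: a frequency separation feeding the two streams, and a distillation weight schedule that changes with the training stage. The paper's exact formulation is not given here, so the following is only a minimal sketch under assumed choices: a circular low-pass mask in the Fourier domain for the split (the `radius` cutoff is a hypothetical parameter), and a simple linear schedule for the two distillation weights.

```python
import numpy as np

def frequency_split(img, radius=8):
    """Split a grayscale image into low- and high-frequency parts using a
    centered circular low-pass mask in the Fourier domain. Illustrative
    only; the paper's actual separation method is not specified here."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_pass = dist <= radius                      # keep frequencies near DC
    low = np.fft.ifft2(np.fft.ifftshift(f * low_pass)).real
    high = img - low                               # complementary residual
    return low, high

def dynamic_distill_weights(epoch, total_epochs):
    """Hypothetical schedule for the dynamic distillation loss: preserve
    low-frequency (global) information more in early epochs and shift the
    weight toward high-frequency (detail) information later."""
    t = epoch / max(total_epochs - 1, 1)           # training progress in [0, 1]
    w_low = 1.0 - 0.5 * t
    w_high = 0.5 + 0.5 * t
    return w_low, w_high
```

Because `high` is defined as the residual `img - low`, the two streams always reconstruct the input exactly; the combined distillation term would then be `w_low * L_distill(low) + w_high * L_distill(high)` under this assumed schedule.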
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning