Triplet learning of task representations in latent space for continual learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Continual Learning, Triplet Learning, Image Generation
Abstract: Continual learning trains a model on a sequence of tasks, learning the current task while retaining knowledge of previous tasks. Prior work has explored methods that operate in latent space, including rehearsal in latent space and latent space partitioning. However, when the latent spaces of different tasks overlap, knowledge from one task can interfere with another, degrading metrics such as classification accuracy or image-reconstruction quality. To address this problem, we propose training an autoencoder with a triplet loss that partitions its latent space. We denote the output of the encoder as the original latent space O and the output of a manually chosen layer of the decoder as the common latent space C. Specifically, to mitigate the overlap, we apply a triplet loss in the common latent space that (1) clusters the latent variables of data from the same class, keeping each class's latent region compact, and (2) pushes apart the latent variables of data from different classes. We evaluated our method on several datasets, including MNIST, FashionMNIST, and CelebA. Our model achieves an FID of 19 on MNIST and a recall of 0.272 on CelebA, outperforming state-of-the-art models trained under similar setups. Qualitatively, visualizations of the latent space show clearer partitioning than those produced by other latent-space methods.
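
To make the partitioning objective concrete, here is a minimal sketch of how a triplet loss could be applied in the common latent space C, assuming a PyTorch setup. The encoder and decoder_head callables, the margin value, and the function names are illustrative assumptions rather than the paper's actual implementation; in practice this term would be combined with the autoencoder's reconstruction objective.

import torch.nn as nn

# Illustrative sketch only: the paper's encoder/decoder architectures and
# which decoder layer defines the common latent space C are not given here.
triplet = nn.TripletMarginLoss(margin=1.0)  # margin value is an assumption

def common_latent(encoder, decoder_head, x):
    # Map input x through the encoder (original latent space O) and then
    # through the chosen early decoder layer(s) to reach the common space C.
    return decoder_head(encoder(x))

def latent_triplet_loss(encoder, decoder_head, anchor, positive, negative):
    # anchor and positive are samples from the same class; negative is from
    # a different class. The loss (1) pulls same-class latents together and
    # (2) pushes different-class latents apart, as described in the abstract.
    z_a = common_latent(encoder, decoder_head, anchor)
    z_p = common_latent(encoder, decoder_head, positive)
    z_n = common_latent(encoder, decoder_head, negative)
    return triplet(z_a, z_p, z_n)

During training, this term would typically be added to the reconstruction loss with a weighting coefficient, so the autoencoder learns to reconstruct inputs while keeping class regions in C separated.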
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning