Continual Zero-shot Learning through Semantically Guided Generative Random Walks

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Continual Learning, Zero-shot Learning, Random Walk
Abstract: Learning new knowledge, retaining previous knowledge, and adapting it to future tasks occur simultaneously throughout a human's lifetime. In deep learning, however, these processes are mostly studied in isolation, either from the perspective of lifelong learning without forgetting (continual learning) or of adapting to recognize unseen tasks (zero-shot learning, ZSL). Continual ZSL (CZSL), the desired and more natural learning setting, has been introduced in recent years and is most developed in the transductive setting, which is unrealistic in practice. In this paper, we focus on inductive continual generalized zero-shot learning (CGZSL) via a generative approach, where no unseen class information is provided during training. The success of previous generative approaches rests on learning high-quality representations from seen classes to improve the generative understanding of the unseen visual space. Motivated by this, we first introduce generalization bound tools and provide the first theoretical explanation for the benefits of generative modeling to ZSL and CZSL tasks. Second, we develop a purely inductive Continual Generalized Zero-Shot Learner, using our theoretical analysis to guide improvements in generation quality. The learner employs a novel semantically guided Generative Random Walk (GRW) loss, in which we encourage high transition probability, computed by a random walk, from the seen space to a realistic generated unseen space. We also demonstrate that our learner continually improves the unseen class representation quality, achieving state-of-the-art performance on the AWA1, AWA2, CUB, and SUN datasets and surpassing existing CGZSL methods by around 3-7% on the different datasets. Code is available at https://anonymous.4open.science/r/cgzsl-76E7/main.py
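To make the GRW idea concrete, below is a minimal NumPy sketch, not the paper's implementation. It assumes (in the spirit of association-based random walks) that a walk starts at seen-class features, hops through generated unseen features for a few steps, and is finally mapped back to the seen space; transition matrices are row-softmaxed similarities. All function names (`grw_transition_probs`, `grw_loss`), the round-trip form, and the specific loss are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def grw_transition_probs(seen_feats, gen_unseen_feats, steps=3):
    """Hypothetical GRW sketch: probability of a walk that starts at each
    seen feature, hops through generated unseen features `steps` times,
    and lands back on each seen feature."""
    p_sg = softmax(seen_feats @ gen_unseen_feats.T)        # seen -> generated
    p_gg = softmax(gen_unseen_feats @ gen_unseen_feats.T)  # generated -> generated
    p_gs = softmax(gen_unseen_feats @ seen_feats.T)        # generated -> seen

    walk = p_sg
    for _ in range(steps - 1):
        walk = walk @ p_gg                                 # hop within generated space
    return walk @ p_gs                                     # landing distribution over seen feats

def grw_loss(seen_feats, gen_unseen_feats, steps=3):
    """Illustrative loss: encourage the round-trip walk to return to its
    starting point, so generated unseen features stay semantically tied
    to the seen space (an assumption about the loss form)."""
    probs = grw_transition_probs(seen_feats, gen_unseen_feats, steps)
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), np.arange(n)] + 1e-12))
```

Because each transition matrix is row-stochastic, the multi-step walk remains a valid probability distribution over the seen features, which is what lets a random-walk objective act as a regularizer on the generated unseen space.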
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning