Abstract: Unsupervised learning and meta-learning share the goal of learning more efficiently than starting from scratch. However, meta-learning methods are predominantly employed in supervised settings, where acquiring labels for meta-training is costly and new tasks must be drawn from the same predefined distribution as the training tasks. In this paper, we introduce a novel unsupervised meta-learning framework that leverages latent representations defined on the unit hypersphere. Unlike the state-of-the-art unsupervised meta-learning approach, which assumes a Gaussian mixture prior over latent representations, we construct the latent space with a von Mises-Fisher mixture model. This alternative formulation leads to more stable optimization and improved performance. To enhance the generative capability of our model, we unify the variational autoencoder (VAE) and the generative adversarial network (GAN) within our unsupervised meta-learning framework. Moreover, we propose a dual VAE-GAN framework that imposes a reconstruction constraint on both the latent representations and their transformed counterparts, thereby yielding more representative and discriminative representations. The efficacy of the proposed framework is demonstrated through extensive comparisons with existing methods on diverse benchmark datasets.
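To make the von Mises-Fisher (vMF) mixture prior concrete, the following is a minimal NumPy/SciPy sketch, not the authors' implementation, showing how an encoder output might be projected onto the unit hypersphere and scored under a vMF mixture. The function names, latent dimension, mixing weights, and concentration values are illustrative assumptions.

```python
import numpy as np
from scipy.special import ive, logsumexp


def log_vmf_density(z, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the unit hypersphere.

    z, mu : unit-norm vectors of dimension d; kappa : concentration (> 0).
    Uses the exponentially scaled Bessel function `ive` for numerical stability:
    log I_v(kappa) = log(ive(v, kappa)) + kappa.
    """
    d = mu.shape[-1]
    order = d / 2.0 - 1.0
    # Log normalising constant: log C_d(kappa) = (d/2-1) log kappa - (d/2) log 2*pi - log I_{d/2-1}(kappa)
    log_c = (order * np.log(kappa)
             - (d / 2.0) * np.log(2.0 * np.pi)
             - (np.log(ive(order, kappa)) + kappa))
    return log_c + kappa * np.dot(mu, z)


def log_vmf_mixture_density(z, weights, mus, kappas):
    """Log-density of z under a vMF mixture prior with the given mixing weights, means, and concentrations."""
    comps = [np.log(w) + log_vmf_density(z, mu, k)
             for w, mu, k in zip(weights, mus, kappas)]
    return logsumexp(comps)


# Illustrative usage: normalise a raw latent vector onto the hypersphere and score it.
rng = np.random.default_rng(0)
latent = rng.normal(size=16)
z = latent / np.linalg.norm(latent)                      # map to the unit hypersphere
mus = [m / np.linalg.norm(m) for m in rng.normal(size=(3, 16))]
print(log_vmf_mixture_density(z, weights=[0.5, 0.3, 0.2],
                              mus=mus, kappas=[10.0, 5.0, 20.0]))
```

In such a setup, restricting latent codes to the unit hypersphere removes the scale ambiguity of Euclidean embeddings, which is one intuition behind the more stable optimization claimed in the abstract.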