Tackling the generative learning trilemma through VAE and GMM-controlled latent space class expansion
Keywords: data augmentation, classifier, variational auto-encoder, Gaussian mixture, latent space representation
Abstract: Achieving efficient data augmentation (DA) in time series classification is not a trivial task due to the high complexity of temporal data. Generative models, such as GANs (Generative Adversarial Networks), diffusion models, and Variational Autoencoders (VAEs), are powerful techniques to address the generative learning trilemma of producing (1) high-quality samples, (2) fast sampling, and (3) diversity. These methods vary in their ability to address the trilemma: diffusion models allow for high diversity and high-quality samples, GANs for high-quality samples and fast sampling, and VAEs for high diversity and fast sampling. In this paper, we introduce a novel generative method, ASCENSION (VAE and GMM-controlled latent space class expansion), that retains the strengths of VAEs in terms of diversity and fast sampling while enabling controlled and quantifiable exploration of uncharted regions in the latent space. This approach not only enhances classification performance but also yields higher-quality (more realistic) samples. ASCENSION leverages the probabilistic nature of the VAE's latent space to represent classes as Gaussian mixture models (GMMs). By modifying this mixture, we enable precise manipulation of class probability densities and boundaries. To ensure intra-class compactness and maximize inter-class separation, we apply clustering constraints. Empirical evaluations on the UCR benchmark (102 datasets) show that ASCENSION outperforms state-of-the-art DA methods, achieving an average classification accuracy improvement of approximately 7% and excelling in all aspects of the generative learning trilemma.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8222