ChronoGAM: An End-to-End One-Class Time Series Gaussian Mixture Model

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: one-class, time series, gaussian mixture, deep learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A new time series one-class classification approach based on Gaussian mixture models
Abstract: Several algorithms have recently been proposed for One-Class Learning (OCL) with time series. However, these methods suffer from recurring problems: hypersphere collapse, manually tuned thresholds, numerical instabilities, and even the use of unlabeled instances during training, which directly violates the OCL setting. To avoid these problems, this paper proposes an end-to-end method for time series one-class learning based on a Gaussian Mixture Model (GMM). The proposed method combines an autoencoder adapted to extract temporal and structural features of a time series with distribution learning, yielding better performance than other state-of-the-art methods for time series classification. ChronoGAM is a novel method capable of improving the temporal importance of the representations learned by the autoencoding system. We propose a new objective function that penalizes small values on the covariance matrix without producing exploding gradients and the resulting numerical instabilities, and that adapts the energy computation to avoid exponential functions. The method is tested on over $85$ benchmark datasets, from which $652$ one-class datasets are generated. ChronoGAM wins on $369$ of them, with an average rank of $2.68$, making it the top-ranked method.
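To make the abstract's stability claims concrete, here is a minimal sketch of the two ingredients it mentions: a GMM sample energy evaluated entirely in log-space (via logsumexp, avoiding explicit exponentials) and a penalty on small covariance diagonals. This is an illustrative, DAGMM-style formulation under assumed shapes and names (`sample_energy`, `cov_diag_penalty`, the reciprocal-diagonal penalty), not ChronoGAM's exact objective:

```python
# Hypothetical sketch: log-space GMM energy + covariance-diagonal penalty.
# Shapes, names, and the penalty form are assumptions for illustration.
import numpy as np
from scipy.special import logsumexp

def sample_energy(z, phi, mu, cov, eps=1e-6):
    """Negative log-likelihood of latent points z under a GMM.

    z   : (N, D) latent representations from the autoencoder
    phi : (K,)   mixture weights
    mu  : (K, D) component means
    cov : (K, D, D) component covariances
    """
    N, D = z.shape
    K = phi.shape[0]
    log_probs = np.empty((N, K))
    for k in range(K):
        cov_k = cov[k] + eps * np.eye(D)  # regularize for invertibility
        diff = z - mu[k]                  # (N, D)
        # Solve instead of inverting the covariance, for numerical stability.
        maha = np.einsum("nd,nd->n", diff, np.linalg.solve(cov_k, diff.T).T)
        _, logdet = np.linalg.slogdet(cov_k)
        log_probs[:, k] = (np.log(phi[k] + eps)
                           - 0.5 * (maha + logdet + D * np.log(2 * np.pi)))
    # logsumexp keeps the mixture sum in log-space, so no raw exponentials:
    # E(z) = -log sum_k phi_k N(z | mu_k, cov_k)
    return -logsumexp(log_probs, axis=1)

def cov_diag_penalty(cov, eps=1e-6):
    """Penalize small covariance diagonals (sum of 1/sigma_jj) to discourage
    degenerate, near-singular components. The paper modifies this kind of
    term to avoid exploding gradients; the exact modification is not shown
    here."""
    diags = np.diagonal(cov, axis1=1, axis2=2)  # (K, D)
    return np.sum(1.0 / (diags + eps))
```

At test time, a sketch like this would score a series as anomalous when its latent energy exceeds a threshold fitted on training (one-class) data; the paper's contribution is learning this end-to-end without a manually chosen threshold.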
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7095