Abstract: For time-series classification and retrieval applications, an important requirement is to develop representations and metrics that are robust to re-parametrizations of the time axis. Temporal re-parametrization as a model can account for variability in the underlying generative process, sampling-rate variations, or plain temporal misalignment. In this paper, we extend prior work on disentangling the latent spaces of autoencoding models to design a novel architecture that learns rate-invariant latent codes in a completely unsupervised fashion. Unlike conventional neural network architectures, this method explicitly disentangles temporal parameters, in the form of order-preserving diffeomorphisms with respect to a learnable template, which makes the latent space more easily interpretable. We demonstrate the efficacy of our approach on a synthetic dataset and a real hand action-recognition dataset.
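To make the central idea concrete, below is a minimal, illustrative sketch of such an autoencoder in PyTorch. It is not the authors' released code: the network sizes, the `RateInvariantAE` name, the softmax-plus-cumulative-sum parametrization of the monotone warp, and the use of `grid_sample` for resampling are all our own assumed choices. The sketch only shows the mechanism the abstract describes: the encoder splits its output into a content code and a discretized order-preserving diffeomorphism, and the reconstruction is the decoded template resampled under that warp, so all timing variability must be absorbed by the warp rather than the code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RateInvariantAE(nn.Module):
    """Illustrative sketch (assumed architecture, not the paper's exact model):
    the encoder predicts (a) a content code and (b) unconstrained scores whose
    softmax + cumulative sum form a strictly increasing warp of (0, 1], i.e. a
    discretized order-preserving diffeomorphism applied to a decoded template."""

    def __init__(self, seq_len, code_dim=16):
        super().__init__()
        self.seq_len, self.code_dim = seq_len, code_dim
        self.encoder = nn.Sequential(
            nn.Linear(seq_len, 64), nn.ReLU(),
            nn.Linear(64, code_dim + seq_len),  # rate-invariant code + warp scores
        )
        self.decoder = nn.Sequential(           # maps the code to an unwarped template
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, seq_len),
        )

    def forward(self, x):                        # x: (batch, seq_len)
        h = self.encoder(x)
        code, scores = h[:, :self.code_dim], h[:, self.code_dim:]
        # Softmax gives positive increments; their cumulative sum is strictly
        # increasing and ends at 1, so gamma is a monotone warp of the time axis.
        gamma = torch.cumsum(F.softmax(scores, dim=-1), dim=-1)
        template = self.decoder(code)            # content only, no temporal warp
        # Resample the template at the warped time stamps. grid_sample expects
        # coordinates in [-1, 1]; we treat the sequence as a 1-pixel-high image.
        grid_x = 2.0 * gamma - 1.0
        grid = torch.stack([grid_x, torch.zeros_like(grid_x)], dim=-1).unsqueeze(1)
        recon = F.grid_sample(
            template.unsqueeze(1).unsqueeze(1),  # (batch, 1, 1, seq_len)
            grid, align_corners=True,            # grid: (batch, 1, seq_len, 2)
        ).squeeze(1).squeeze(1)
        return recon, code, gamma

# Training minimizes reconstruction error; two inputs that differ only by a
# re-parametrization of time can share a code while differing in gamma.
model = RateInvariantAE(seq_len=128)
x = torch.randn(8, 128)
recon, code, gamma = model(x)
loss = F.mse_loss(recon, x)
```

One design point worth noting in this sketch: parametrizing the warp as a cumulative sum of softmax outputs enforces monotonicity by construction, so order preservation never has to be imposed through a penalty term.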