Spatially Resolved Temporal Networks: Online Unsupervised Representation Learning of High Frequency Time Series

Published: 01 Feb 2023, Last Modified: 13 Feb 2023, Submitted to ICLR 2023, Readers: Everyone
Keywords: High frequency time series, Representation Learning, Online Learning
TL;DR: Unsupervised representation learning to generate clinically interpretable waveforms.
Abstract: Univariate high-frequency time series are dominant data sources for many medical, economic and environmental applications. In many of these domains, the time series are tied to real-time changes in state. In the intensive care unit, for example, changes in an electrocardiogram signal can indicate a heart attack, and intracranial pressure waveforms can indicate whether a patient is developing decreased blood perfusion to the brain. However, most representation learning to resolve states is conducted in an offline, batch-dependent manner. In high-frequency time series, high intra-state and inter-sample variability makes offline, batch-dependent learning a relatively difficult task. Hence, we propose Spatially Resolved Temporal Networks (SpaRTeN), a novel composite deep learning model for online, unsupervised representation learning through a spatially constrained latent space. We simultaneously train two distinct blocks: a recurrent neural network ensemble $f_R$ that captures states in high-frequency time series, and a spatial block $f_S$ that spatially resolves state changes from the predictions generated by $f_R$. The spatial block $f_S$ identifies the block in $f_R$ that best fits the current state of the time series, and the training procedure for $f_R$ optimizes that block. This procedure corresponds to a minimax framework. When $f_S$ and $f_R$ are deep neural networks, the entire system can be trained via back-propagation. Finally, we demonstrate the application of this framework to online forecasting and interpretable, zero-shot clustering, and show that SpaRTeN outperforms spectral clustering and a Gaussian mixture model.
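The abstract describes the coupling between $f_R$ and $f_S$ only at a high level. The following is a minimal, hypothetical PyTorch sketch of one online update under the assumption that each block in $f_R$ is trained on a one-step forecasting objective and that $f_S$ routes each window to the best-fitting block; the module names (RNNBlockEnsemble, SpatialBlock, online_step), layer sizes, and exact losses are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a SpaRTeN-style online update; all names, sizes,
# and losses are assumptions made for illustration.
import torch
import torch.nn as nn

class RNNBlockEnsemble(nn.Module):
    """f_R: an ensemble of K small GRUs, each producing a one-step forecast."""
    def __init__(self, k=4, hidden=32):
        super().__init__()
        self.cells = nn.ModuleList([nn.GRU(1, hidden, batch_first=True) for _ in range(k)])
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(k)])

    def forward(self, x):                          # x: (batch, time, 1)
        outs = []
        for cell, head in zip(self.cells, self.heads):
            h, _ = cell(x)
            outs.append(head(h[:, -1]))            # forecast of the next sample
        return torch.stack(outs, dim=1)            # (batch, K, 1)

class SpatialBlock(nn.Module):
    """f_S: scores the K blocks from their per-block forecast errors."""
    def __init__(self, k=4):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(k, 32), nn.ReLU(), nn.Linear(32, k))

    def forward(self, errors):                     # errors: (batch, K)
        return self.score(errors)                  # logits over the K blocks

f_R, f_S = RNNBlockEnsemble(), SpatialBlock()
opt_R = torch.optim.Adam(f_R.parameters(), lr=1e-3)
opt_S = torch.optim.Adam(f_S.parameters(), lr=1e-3)

def online_step(window, target):
    """One online update: f_S routes the window, the selected block in f_R is trained."""
    preds = f_R(window).squeeze(-1)                # (batch, K)
    errors = (preds - target).pow(2)               # per-block squared forecast error
    logits = f_S(errors.detach())
    weights = torch.softmax(logits, dim=-1)        # soft assignment over blocks
    # f_R minimizes the forecast error of the block(s) selected by f_S.
    loss_R = (weights.detach() * errors).sum(dim=1).mean()
    opt_R.zero_grad(); loss_R.backward(); opt_R.step()
    # f_S is pushed toward the currently best-fitting block (minimax-style coupling).
    best = errors.detach().argmin(dim=1)
    loss_S = nn.functional.cross_entropy(logits, best)
    opt_S.zero_grad(); loss_S.backward(); opt_S.step()
    return weights.argmax(dim=1)                   # zero-shot state / cluster label

# Example shapes (synthetic data): 8 windows of 200 samples, one-step-ahead targets.
window = torch.randn(8, 200, 1)
target = torch.randn(8, 1)
state = online_step(window, target)

In this sketch the argmax over $f_S$'s assignment serves as the interpretable, zero-shot cluster label mentioned in the abstract; the authors' actual objectives and routing mechanism may differ.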
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Machine Learning for Sciences (eg biology, physics, health sciences, social sciences, climate/sustainability )