Keywords: Time series, irregular time series, self-supervised learning
Abstract: Multivariate time series regression, encompassing forecasting and interpolation, is crucial for numerous real-world applications, particularly in healthcare, climate science, and ecology. While recent work has focused on improving modeling for time series regression, two main limitations persist. First, the prevalence of irregularly sampled time series with missing values poses significant challenges.
For instance, healthcare applications often involve predicting future or missing observations from irregular data to enable continuous patient monitoring and timely intervention. Because current approaches mainly rely on assumptions that hold for regular time series, such as strong periodicity, they exhibit performance degradation when applied to irregular ones. Second, while some state-of-the-art (SOTA) methods do model irregularity and perform regression tasks on irregular data, they are often trained in a fully supervised manner. This limits their ability to generalize easily to different domains (e.g., training and testing datasets with different numbers of variables). To address these challenges, we propose GITaR, a Generalized Irregular Time Series Regression model with a masking-and-reconstruction pretraining mechanism, aiming to capture the inherent irregularity in time series and learn robust, generalizable representations without supervision for downstream regression tasks. Comprehensive experiments on common real-world regression tasks in healthcare, human activity recognition, and climate science underline the superior performance of GITaR compared to state-of-the-art methods. Our results highlight our model's unique capability to generalize across different domains, demonstrating the potential for broad applicability in various fields requiring accurate temporal prediction and interpolation.
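The masking-and-reconstruction pretraining described above can be illustrated with a minimal sketch. Note this is an assumption-laden illustration of the general technique, not the authors' actual GITaR implementation: the function names (`mask_observed`, `reconstruction_loss`), the 30% mask ratio, and the use of an observation mask for missing values are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_observed(values, obs_mask, mask_ratio=0.3):
    """Randomly hide a fraction of the *observed* entries (hypothetical sketch).

    values:   (T, D) array of measurements (zeros where never observed)
    obs_mask: (T, D) binary array, 1 where a value was actually observed
    Returns the masked inputs and a binary target mask marking hidden entries.
    """
    # Only entries that were actually observed can serve as reconstruction targets;
    # positions missing due to irregular sampling stay missing.
    hide = (rng.random(values.shape) < mask_ratio) & (obs_mask == 1)
    masked = np.where(hide, 0.0, values)
    return masked, hide.astype(float)

def reconstruction_loss(pred, values, target_mask):
    """Mean squared error computed only on the deliberately hidden entries."""
    diff = (pred - values) ** 2 * target_mask
    return diff.sum() / max(target_mask.sum(), 1.0)
```

During self-supervised pretraining, an encoder would receive `masked` (plus the observation mask and timestamps) and be trained to minimize `reconstruction_loss` on the hidden positions, so no downstream labels are required.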
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5417