IMTS is Worth Time $\times$ Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction
Abstract: Irregular Multivariate Time Series (IMTS) forecasting is challenging due to the unaligned nature of multi-channel signals and the prevalence of extensive missing data, which prevents existing methods from capturing reliable temporal patterns. While pre-trained foundation models show potential for addressing these challenges, they are typically designed for Regularly Sampled Time Series (RTS). Motivated by the visual Masked AutoEncoder's (MAE) powerful capability for modeling sparse multi-channel information and its success in RTS forecasting, we propose **VIMTS**, a framework adapting the **V**isual MAE for **IMTS** forecasting. To mitigate the effect of missing values, VIMTS first processes IMTS along the timeline into feature patches at equal intervals; these patches are then complemented using learned cross-channel dependencies. It then leverages the visual MAE's capability for handling sparse multi-channel data to reconstruct the patches, followed by a coarse-to-fine technique that generates precise predictions from focused contexts. In addition, we integrate self-supervised learning for improved IMTS modeling by adapting the visual MAE to IMTS data. Extensive experiments demonstrate VIMTS's superior performance and few-shot capability, advancing the application of visual foundation models to more general time series tasks. Our code is available at https://github.com/WHU-HZY/VIMTS.
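To make the first stage concrete, the sketch below buckets irregular (timestamp, value, channel) observations into equal-interval patches along the timeline. It is a minimal illustration only: the function name, the mean-pooling within a patch, and the use of NaN to mark missing entries are our assumptions, not the paper's actual implementation.

```python
import numpy as np

def to_time_patches(timestamps, values, channels, t_max, n_patches):
    """Bucket irregular observations into equal-interval time patches.

    Hypothetical helper sketching the patching idea: each observation
    is assigned to one of `n_patches` equal intervals over [0, t_max],
    and values falling in the same (patch, channel) cell are averaged.
    Cells with no observations stay NaN, marking missing data.
    """
    edges = np.linspace(0.0, t_max, n_patches + 1)
    n_channels = int(channels.max()) + 1
    patches = np.full((n_patches, n_channels), np.nan)
    # Index of the interval each timestamp falls into.
    idx = np.clip(np.searchsorted(edges, timestamps, side="right") - 1,
                  0, n_patches - 1)
    for p in range(n_patches):
        for c in range(n_channels):
            mask = (idx == p) & (channels == c)
            if mask.any():
                patches[p, c] = values[mask].mean()
    return patches

# Two channels observed at unaligned times over [0, 2):
ts = np.array([0.1, 0.4, 0.6, 1.4])
vals = np.array([1.0, 3.0, 5.0, 7.0])
chs = np.array([0, 0, 1, 1])
patches = to_time_patches(ts, vals, chs, t_max=2.0, n_patches=2)
# patches[1, 0] is NaN: channel 0 has no observation in [1, 2).
```

The NaN cells are exactly where the cross-channel complementation and MAE reconstruction described in the abstract would come into play.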
Lay Summary: Imagine trying to predict something important using data that's messy – signals from multiple sensors that don't always record at the same time, or have lots of gaps. That's the challenge of Irregular Multivariate Time Series (IMTS) forecasting. Existing methods struggle with this incomplete data, and even powerful pre-trained models are usually built for perfectly clean, regular datasets.
We looked to Masked AutoEncoders (MAEs), a type of AI that's great at understanding incomplete images. We developed VIMTS, a new approach that adapts these visual MAEs for IMTS forecasting. VIMTS intelligently organizes messy data into "patches" and fills in missing information using relationships between different data channels. Then, it uses the MAE's reconstruction ability to make accurate predictions, refining them from broad to specific details. We also added a self-supervised learning step to help VIMTS learn even better from IMTS data.
Our tests show VIMTS performs exceptionally well, even when data is scarce, pushing the boundaries for applying visual AI in diverse time series problems.
Primary Area: Deep Learning->Sequential Models, Time series
Keywords: Irregular Multivariate Time Series Prediction; Visual Masked Autoencoder; Self-Supervised Learning
Submission Number: 2799