Abstract: This work studies the problem of time series analysis with generalist models, i.e., models trained across many data domains. Drawing inspiration from the widespread success of large language models, we consider the simple strategy of discretely tokenizing time series data drawn from a myriad of datasets via self-supervision, then using the fixed tokenization to solve a variety of tasks across many data domains. Canonically, time series models are either trained on a single dataset, built in a task-specific manner (e.g., only as a forecaster), or fed patches of time as model inputs. As such, performant generalist, multi-task time series models with discrete representations are of value. Our method, TOkenized Time Series EMbeddings (TOTEM), produces such generalist time series models with minimal or no fine-tuning while exhibiting strong zero-shot performance. We evaluate TOTEM extensively over nearly 500 experiments on three commonly studied time series tasks with real-world data: imputation (17 baselines, 12 datasets), anomaly detection (19 baselines, 25 datasets), and forecasting (14 baselines, 12 datasets). We conclude that TOTEM matches or outperforms existing state-of-the-art models in both the canonical specialist setting (i.e., training one model on one domain) and the generalist setting (i.e., training a single model on many domains), demonstrating the efficacy of tokenization for general time series analysis.
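To make the discrete-tokenization idea concrete, the following is a minimal, illustrative sketch, not the paper's implementation: it assumes a vector-quantization-style codebook (which would normally be learned via self-supervised reconstruction) and maps each fixed-length segment of a series to the index of its nearest codebook vector, yielding a sequence of discrete tokens that downstream task models could consume. The function name `tokenize_series`, the window length, and the toy codebook are hypothetical.

```python
import numpy as np

def tokenize_series(series, codebook, window=4):
    """Map a 1-D time series to discrete token indices.

    Each non-overlapping window of length `window` is assigned the index of
    its nearest codebook vector (Euclidean distance). The codebook stands in
    for one learned by self-supervision; here it is supplied directly.
    """
    n = (len(series) // window) * window
    segments = series[:n].reshape(-1, window)                        # (num_tokens, window)
    dists = ((segments[:, None, :] - codebook[None]) ** 2).sum(-1)   # (num_tokens, K)
    return dists.argmin(axis=1)                                      # discrete token ids

# Toy usage with a random codebook of K=256 codes, each of length 4.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 4))
series = np.sin(np.linspace(0, 10, 128)) + 0.1 * rng.normal(size=128)
tokens = tokenize_series(series, codebook)
print(tokens[:10])  # a short sequence of token ids
```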
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Giannis_Nikolentzos1
Submission Number: 3230