Multi-Objective Model Selection for Time Series Forecasting

Published: 28 Jan 2022, Last Modified: 22 Oct 2023 · ICLR 2022 Submitted
Keywords: time series, forecasting, model selection, multiobjective optimization, transfer-learning, tabular dataset.
Abstract: Research on time series forecasting has predominantly focused on developing methods that improve accuracy. However, other criteria such as training time or latency are critical in many real-world applications. We therefore address the question of how to choose an appropriate forecasting model for a given dataset among the plethora of available forecasting methods when accuracy is only one of many criteria. For this, our contributions are two-fold. First, we present a comprehensive benchmark, evaluating 7 classical and 6 deep learning forecasting methods on 44 heterogeneous, publicly available datasets. The benchmark code is open-sourced along with evaluations and forecasts for all methods. These evaluations enable us to answer open questions such as the amount of data required for deep learning models to outperform classical ones. Second, we leverage the benchmark evaluations to learn good defaults that consider multiple objectives such as accuracy and latency. By learning a mapping from forecasting models to performance metrics, we show that our method ParetoSelect is able to accurately select models from the Pareto front — alleviating the need to train or evaluate many forecasting models for model selection. To the best of our knowledge, ParetoSelect constitutes the first method to learn default models in a multi-objective setting.
One-sentence Summary: A method and a benchmark to learn good defaults for time-series forecasting models that optimize not only accuracy but also latency.
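
For intuition only, the sketch below illustrates the kind of multi-objective trade-off the abstract describes: given per-model evaluations on two objectives (e.g. forecast error and latency, lower is better for both), it computes the set of non-dominated models, i.e. the Pareto front. This is a generic illustration, not the authors' ParetoSelect code; the model names and numbers are hypothetical.

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows; lower is better for every column.

    `objectives` has one row per candidate model and one column per metric,
    e.g. [forecast error, latency].
    """
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # A candidate is dominated if some other candidate is at least as good
        # on every metric and strictly better on at least one.
        dominated = np.all(objectives <= objectives[i], axis=1) & np.any(
            objectives < objectives[i], axis=1
        )
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical evaluations: (error, latency in seconds) per forecasting model.
models = ["ETS", "ARIMA", "DeepAR", "Transformer"]
scores = np.array([
    [0.21, 0.5],
    [0.20, 2.0],
    [0.15, 8.0],
    [0.15, 20.0],
])
for idx in pareto_front(scores):
    print(models[idx], scores[idx])
```

With these made-up numbers, the Transformer row is dropped because DeepAR matches its error at lower latency, while the cheaper but less accurate classical models remain on the front.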
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2202.08485/code)