Explainable Multi-Objective Model Selection for Time Series Forecasting

20 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Meta-learning, Time Series Forecasting, Resource-aware ML, Explainability, Trustworthy AI
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We introduce X-PCR, a novel approach for making model selection both explainable and resource-aware, and successfully apply it in the domain of time series forecasting.
Abstract: Machine learning (ML) models exhibit diverse properties, and deployment inevitably trades certain performance aspects against others. This is particularly true for time series forecasting, where special characteristics such as seasonality impact how models perform. Since there is “no free lunch”, practitioners have to choose among available methods when assembling new learning systems. Benchmarks, meta-learning, and automated ML come to aid, but in many cases focus on predictive capabilities while ignoring other aspects such as complexity and resource consumption. This is especially concerning given the popularity of deep neural networks (DNNs) for forecasting, as these models are widely perceived as computation-heavy black boxes. To alleviate these shortcomings, we propose X-PCR – a novel approach for explainable multi-objective model selection. It uses meta-learning to assess the suitability of any model in terms of (p)redictive error, (c)omplexity, and (r)esource demand. By allowing users to prioritize the individual objectives in this trade-off, model recommendations become both controllable and understandable. We demonstrate the feasibility of our methodology on the task of forecasting time series with state-of-the-art DNNs. In total, we perform over 1000 experiments across 114 data sets, discuss the resulting efficiency landscape, and provide evidence of how X-PCR outperforms other selection approaches. On average, our approach requires only 20% of the computation cost to recommend models that achieve 85% of the best possible performance.
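As a rough illustration of the user-weighted trade-off the abstract describes, the sketch below shows one plausible way to combine (p)redictive error, (c)omplexity, and (r)esource demand into a single prioritized ranking. The function name, the min-max normalization, and the weighted-sum aggregation are all assumptions for illustration; the abstract does not disclose X-PCR's actual scoring mechanism.

```python
# Minimal sketch only: NOT the authors' X-PCR method, just an illustrative
# weighted aggregation of the three objectives named in the abstract.
import numpy as np

def pcr_score(p_error, complexity, resources, weights=(1.0, 1.0, 1.0)):
    """Lower is better: weighted sum of min-max-normalized objectives."""
    objectives = np.stack([p_error, complexity, resources], axis=1)  # (n_models, 3)
    mins, maxs = objectives.min(axis=0), objectives.max(axis=0)
    normalized = (objectives - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    w = np.asarray(weights, dtype=float)
    return normalized @ (w / w.sum())  # one score per candidate model

# Hypothetical example: pick the model with the best trade-off when
# resource demand is weighted twice as heavily as complexity.
p = np.array([0.12, 0.10, 0.18])   # per-model predictive errors (assumed)
c = np.array([5e6, 2e7, 1e5])      # parameter counts (assumed)
r = np.array([30.0, 240.0, 5.0])   # training seconds (assumed)
best = int(np.argmin(pcr_score(p, c, r, weights=(1.0, 0.5, 2.0))))
```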
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2425