Gaussian Process-Based Corruption-resilience Forecasting Models

23 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Time Series Forecasting, Denoising Models, Gaussian Process Models, Neural Networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Enhancing time series forecasting by denoising Gaussian Process corruption via a corrupt-denoise-forecasting framework
Abstract: Time series forecasting is challenging due to complex temporal dependencies and unobserved external factors, which can lead to incorrect predictions even from the best forecasting models. Using more training data is one way to improve accuracy, but such data is often limited. Instead, we build on denoising approaches that have proven successful for image generation. Corrupting a time series with the commonly used isotropic Gaussian noise, however, yields unnaturally behaving series that do not represent the true error modes of modern forecasting models. To avoid this, we propose to employ Gaussian Processes to generate smoothly correlated corrupted time series. Rather than directly corrupting the training data, we propose a joint forecast-corrupt-denoise model that encourages the forecasting model to focus on accurately predicting coarse-grained behavior, while the denoising model captures fine-grained behavior. All three parts interact via a corruption model that enforces resilience. Our extensive experiments demonstrate that the proposed corruption-resilient forecasting approach improves the forecasting accuracy of several state-of-the-art forecasting models and outperforms several other denoising approaches.
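To illustrate the key idea of smoothly correlated corruption, here is a minimal sketch of drawing Gaussian Process noise (RBF kernel over time indices) and adding it to a series. The function name, kernel choice, and hyperparameters (`length_scale`, `noise_scale`) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gp_corrupt(series, length_scale=5.0, noise_scale=0.5, seed=0):
    """Corrupt a 1-D time series with smoothly correlated GP noise.

    Unlike isotropic (i.i.d.) Gaussian noise, a sample from a GP with an
    RBF kernel varies smoothly in time, so the corrupted series still
    behaves like a plausible time series.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(len(series), dtype=float)
    # RBF (squared-exponential) covariance between all pairs of time steps.
    cov = noise_scale**2 * np.exp(
        -0.5 * ((t[:, None] - t[None, :]) / length_scale) ** 2
    )
    cov += 1e-8 * np.eye(len(series))  # jitter for numerical stability
    noise = rng.multivariate_normal(np.zeros(len(series)), cov)
    return series + noise

# Example: corrupt a sine wave with smooth GP noise.
x = np.sin(np.linspace(0, 4 * np.pi, 100))
x_corrupted = gp_corrupt(x)
```

In a corrupt-denoise-forecast pipeline of the kind the abstract describes, a denoising model would then be trained to recover `x` from `x_corrupted`, while the forecaster predicts the coarse-grained signal.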
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7724