MIRAGE: Modelling Interpretable Multivariate Time Series Forecasts with Actionable Ground Explanations

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Representation Learning, Interpretable Representations, Explainability, Forecasting, Markov Models, LSTM, Attention Networks, Clustering
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Modelling Interpretable Multivariate Time Series Forecasts with Actionable Ground Explanations
Abstract: Multivariate Time Series (MTS) forecasting has made great strides, reaching very low errors, through recent advances in neural networks, e.g., Transformers. However, in critical situations such as predicting a death in an ICU or sudden gaming overindulgence, an accurate prediction without contributing evidence is of little use. It is important to have model-driven Interpretability, allowing proactive comprehension of the trajectory toward an extremity, and an associated Explainability, allowing for preventive steps; e.g., controlling blood pressure to avoid death, or nudging players to take breaks to prevent overplay. We introduce a novel deep neural network, MIRAGE, which overcomes the interdependent challenges of (a) temporally non-smooth data trajectories for interpretability, (b) a highly multi-dimensional temporal space for explainability, and (c) improving forecasting accuracy, all at once. MIRAGE: (i) improves forecast MSE by over 85% relative to the most relevant SOM-VAE-based SOTA networks; and (ii) unravels the intricate multivariate relationships and temporal trajectories that contribute to sudden movements toward criticalities in temporally chaotic datasets.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7808