Abstract: Modern temporal modeling methods, such as the Transformer and its variants, have demonstrated remarkable capabilities in handling sequential data from specific domains like language and vision. Although they achieve high performance with large-scale data, they often contain redundant or unexplainable structures. When applied to real-world datasets with limited observable variables that may be affected by many unknown factors, these methods can struggle to identify the meaningful patterns and dependencies inherent in the data, making the modeling unstable and unpredictable. To tackle this critical issue, in this article we develop a novel algorithmic framework for inferring the latent factors implied by observed temporal data. The inferred factors are used to form multiple predictable and independent signal components that enable not only the reconstruction of future time series for accurate prediction but also sparse relation reasoning for long-term efficiency. To achieve this, we introduce three characteristics, i.e., predictability, sufficiency, and identifiability, and model these characteristics of the latent factors via powerful deep latent dynamics models to infer the predictable signal components. Empirical results on multiple real-world datasets demonstrate the effectiveness of our method across different kinds of time series forecasting tasks, and statistical analyses validate the predictability and interpretability of the learned latent factors.