Abstract: Influenza poses a serious threat to human health and causes a large number of deaths every year. Transformers have recently proven effective for influenza-like illness (ILI) forecasting. However, these end-to-end deep models are mathematically almost black boxes, which hinders us from inferring the specific role and function of each layer, a key challenge common to deep neural networks. At the same time, explainability helps users trust and deploy AI systems effectively. In this paper, we propose an efficient ILI forecasting framework incorporating a patching design and variable-channel pairs, which can accommodate any white-box transformer, thereby endowing ILI forecasting with both interpretability and accuracy. Through extensive experimental validation, leveraging white-box transformers such as CRATE, our White-box Time Series Transformer (WhiteTST) framework achieves state-of-the-art accuracy on ILI datasets. We visualize the self-attention maps within WhiteTST to further demonstrate its explainability. Our results suggest a pathway for designing white-box foundation models for ILI forecasting that concurrently exhibit high accuracy and interpretability. The code is available at https://github.com/HITshenrj/WhiteTST.
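The abstract refers to a patching design for the time series input. As a rough illustration of the general patching idea common to patch-based time series transformers (not the paper's exact implementation; the function name, patch length, and stride below are all assumptions for illustration), a single channel of length L can be split into overlapping patches that serve as tokens:

```python
import numpy as np

# Hypothetical sketch of the general patching idea: a univariate channel of
# length L is split into overlapping patches of length `patch_len` with
# stride `stride`, and each patch becomes one token for the transformer.
# All names and parameter values here are illustrative assumptions.
def patchify(series: np.ndarray, patch_len: int = 16, stride: int = 8) -> np.ndarray:
    """Split a 1-D series of length L into an array of shape (num_patches, patch_len)."""
    L = series.shape[0]
    num_patches = (L - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len]
                     for i in range(num_patches)])

# Example: a 36-week ILI rate series -> patch tokens.
ili_rates = np.random.rand(36)   # placeholder weekly ILI rates
tokens = patchify(ili_rates)     # shape: (3, 16) with the defaults above
print(tokens.shape)
```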