Abstract: Spatiotemporal graph convolutional networks have recently achieved dominant performance on spatiotemporal prediction tasks. However, most models rely on node-to-node message passing, which makes them sensitive to spatiotemporal shifts and exposes them to out-of-distribution (OOD) challenges. To address these issues, we introduce the \textbf{\underline{S}}patio-\textbf{\underline{T}}emporal \textbf{\underline{O}}OD \textbf{\underline{P}}rocessor (STOP), which employs a centralized messaging mechanism together with a message perturbation mechanism to enable robust spatiotemporal interactions. Specifically, the centralized messaging mechanism integrates Context-Aware Units for coarse-grained spatiotemporal feature interactions with nodes, effectively blocking traditional node-to-node messages. We further apply a message perturbation mechanism that disrupts this messaging process, compelling the model to extract generalizable contextual features from the generated variant environments. Finally, we customize a spatiotemporal distributionally robust optimization approach that exposes the model to challenging environments, further enhancing its generalization. Compared with 14 baselines across six datasets, STOP achieves up to \textbf{17.01\%} improvement in generalization performance and \textbf{18.44\%} improvement in inductive learning performance. The code is available at https://github.com/PoorOtterBob/STOP.
Lay Summary: Spatiotemporal data, such as traffic patterns or air quality measurements, often needs to be predicted to support decision-making in real-world applications. However, existing models that rely heavily on direct interactions between nodes in a graph often struggle when the data environment changes unexpectedly, a challenge known as the out-of-distribution (OOD) problem. To address this challenge, we developed a new model called STOP (Spatio-Temporal OOD Processor).
STOP uses a novel centralized messaging system, which avoids the limitations of traditional node-to-node communication by instead focusing on higher-level, context-aware interactions. Additionally, to make the model more robust, STOP deliberately disrupts its own messaging process during training, forcing it to learn more generalizable patterns that can adapt to new environments. Finally, we use a distributionally robust optimization approach that further strengthens the model by exposing it to challenging and diverse data environments during training.
When tested on six different datasets, STOP outperformed 14 existing baseline models, improving generalization performance by up to 17.01% and inductive learning performance by up to 18.44%. This means STOP is better equipped to handle unexpected changes in data environments, making it a powerful tool for spatiotemporal predictions. The code for STOP is publicly available on GitHub for further research and application.
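The centralized messaging and perturbation ideas described above can be illustrated with a minimal sketch. This is not the paper's implementation: the soft-assignment scoring, the update rule, and the function name `centralized_message_pass` are all assumptions made for illustration. The key point it shows is that nodes exchange information only through a small set of context units (never node-to-node), and that the broadcast back to nodes can be perturbed with noise during training.

```python
import numpy as np

def centralized_message_pass(X, U, noise_scale=0.0, rng=None):
    """One round of centralized messaging (illustrative sketch only).

    X: (N, d) node features; U: (K, d) context-unit states, with K << N.
    Nodes never message each other directly: each node is softly assigned
    to the K context units, the units aggregate coarse-grained node
    information, and the result is broadcast back to the nodes.
    """
    # Soft assignment of nodes to context units via scaled dot-product scores
    scores = X @ U.T / np.sqrt(X.shape[1])                 # (N, K)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                      # rows sum to 1

    # Context units aggregate node features (weighted average per unit)
    U_new = A.T @ X / (A.sum(axis=0)[:, None] + 1e-8)      # (K, d)

    # Broadcast context back to nodes; optionally perturb the messages
    # during training to force reliance on generalizable context
    msg = A @ U_new                                        # (N, d)
    if noise_scale > 0 and rng is not None:
        msg = msg + noise_scale * rng.standard_normal(msg.shape)
    return X + msg, U_new

# Example: 8 nodes with 4-dim features, 2 context units
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
U = rng.standard_normal((2, 4))
X_out, U_out = centralized_message_pass(X, U, noise_scale=0.1, rng=rng)
```

Because all node interactions are routed through K context units, one round costs O(NK) rather than the O(N^2) of dense node-to-node messaging, which is what makes the "blocking" of direct messages cheap to enforce.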
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/PoorOtterBob/STOP
Primary Area: Applications->Time Series
Keywords: Spatiotemporal learning, out-of-distribution, spatiotemporal prediction
Submission Number: 5800