Spatio-Temporal Graph Learning with Large Language Model

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Spatio-Temporal Graph, Contrastive Learning, Large Language Model
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Spatio-temporal prediction is of immense significance in urban computing, as it enables decision-makers to anticipate critical phenomena such as traffic flow, crime rates, and air quality. Researchers have made remarkable progress in this field by leveraging the graph structure inherent in spatio-temporal data and harnessing Graph Neural Networks (GNNs) to capture intricate relationships and dependencies across time slots and locations, significantly improving representation learning and prediction accuracy. This study explores the capacity of Large Language Models (LLMs) to handle the dynamic nature of spatio-temporal data in urban systems. The proposed approach, called STLLM, integrates LLMs with a cross-view mutual information maximization paradigm to capture implicit spatio-temporal dependencies and preserve spatial semantics in urban space. By harnessing the power of LLMs, the approach captures intricate and implicit spatial and temporal patterns, yielding robust and invariant LLM-based knowledge representations. In our framework, cross-view knowledge alignment ensures effective alignment and information preservation across different views while also serving as a form of spatio-temporal data augmentation. The effectiveness of STLLM is validated through theoretical analyses, extensive experiments, and additional investigations, demonstrating its ability to align LLM-based spatio-temporal knowledge and to outperform state-of-the-art baselines on various prediction tasks.
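The abstract describes aligning LLM-derived knowledge representations with spatio-temporal graph embeddings via cross-view mutual information maximization. The paper's exact objective is not given here, but such alignment is commonly instantiated as a contrastive InfoNCE loss between the two views. The sketch below is an illustrative assumption, not the authors' implementation; the function name, dimensions, and temperature are hypothetical.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z_st, z_llm, temperature=0.2):
    """Illustrative InfoNCE-style alignment between a GNN-based
    spatio-temporal view (z_st) and an LLM-based knowledge view
    (z_llm) of the same N regions. Matching rows are positives;
    all other in-batch pairs serve as negatives."""
    z_st = F.normalize(z_st, dim=-1)
    z_llm = F.normalize(z_llm, dim=-1)
    logits = z_st @ z_llm.t() / temperature       # (N, N) cosine similarities
    targets = torch.arange(z_st.size(0))          # positives lie on the diagonal
    # Symmetric loss: align each view toward the other
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Toy usage with random embeddings for 8 regions (hypothetical sizes)
z_st = torch.randn(8, 64)    # spatio-temporal graph embeddings
z_llm = torch.randn(8, 64)   # LLM-based knowledge embeddings
loss = cross_view_infonce(z_st, z_llm)
```

Minimizing this loss lower-bounds the mutual information between the two views, which is the usual justification for contrastive cross-view alignment.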
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2164