Abstract: We introduce *GraphGPT*, a novel self-supervised *generative pre-trained* model for graph learning based on the *Graph Eulerian Transformer* (**GET**). First, we propose **GET**, which combines a standard transformer encoder or decoder architecture with an innovative graph-to-sequence transformation method. This method converts graphs or sampled subgraphs into sequences of tokens representing nodes, edges, and attributes in a reversible manner using Eulerian paths. We pre-train **GET** with one of two self-supervised tasks: next-token prediction (NTP) or scheduled masked-token prediction (SMTP). The pre-trained model is then fine-tuned for downstream tasks such as graph-, edge-, and node-level prediction. Despite its simplicity, GraphGPT matches or surpasses state-of-the-art methods on multiple large-scale Open Graph Benchmark (OGB) datasets, with especially strong results on the molecular property prediction dataset PCQM4Mv2 and the protein-protein association dataset ogbl-ppa. Notably, generative pre-training enables scaling GraphGPT to 2 billion parameters while maintaining performance gains, overcoming the scalability limitations of traditional Graph Neural Networks (GNNs) and prior graph transformers (GTs). To advance research in graph foundation models and facilitate scientific discovery in chemistry, materials science, and related fields, we have released the source code (https://github.com/alibaba/graph-gpt) and model checkpoints (https://www.modelscope.cn/organization/Alibaba-DT).
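To make the graph-to-sequence idea concrete, the sketch below (a minimal illustration of our own using `networkx`, not the released GraphGPT tokenizer; the helper names are hypothetical and attribute tokens are omitted) Eulerizes a graph by duplicating edges until an Eulerian path exists, walks that path, and emits the visited nodes as tokens:

```python
# Minimal sketch of Eulerian-path graph serialization (illustrative only;
# the actual GraphGPT tokenizer also emits node/edge attribute tokens).
import networkx as nx

def graph_to_sequence(G: nx.Graph) -> list:
    """Serialize a connected graph into a node-token sequence via an Eulerian path."""
    # If no Eulerian path exists, duplicate edges (Eulerize) so that every
    # original edge is covered by a single walk.
    H = G if nx.has_eulerian_path(G) else nx.eulerize(G)
    edges = list(nx.eulerian_path(H))
    return [edges[0][0]] + [v for _, v in edges]

def sequence_to_graph(seq: list) -> nx.Graph:
    """Invert the serialization: consecutive tokens form edges; duplicates collapse."""
    G = nx.Graph()
    G.add_nodes_from(seq)
    G.add_edges_from(zip(seq, seq[1:]))
    return G

# Round trip on a star graph (four odd-degree nodes, so Eulerization is needed).
G = nx.star_graph(3)
seq = graph_to_sequence(G)
assert nx.is_isomorphic(G, sequence_to_graph(seq))
```

Because any two consecutive tokens in the sequence are joined by an edge, the walk encodes the full topology and is reversible; self-supervised objectives such as NTP and SMTP can then operate on these token sequences much as language models do on text.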
Lay Summary: Analyzing complex networks such as molecular structures or social connections is challenging for AI. Traditional methods struggle with large-scale data, limiting progress in related research areas like materials and drug discovery.
Inspired by language models that learn from vast amounts of text, we developed GraphGPT. It converts graphs into sequences using a novel method that captures every connection without losing information. This lets the model learn patterns through self-supervised tasks, removing the need for manual labeling.
GraphGPT excels at predicting molecular properties and protein interactions, outperforming existing methods. It scales to billions of parameters, enabling analysis of massive networks like those in chemistry and biology. By releasing our code and models, we aim to accelerate scientific discoveries in medicine, materials science, and beyond, empowering researchers to tackle real-world challenges more effectively.
Link To Code: https://github.com/alibaba/graph-gpt
Primary Area: Deep Learning->Foundation Models
Keywords: Graph, GraphGPT, GPT, Foundation Model, Pre-train
Submission Number: 4861