Keywords: Zero-shot Learning, Graph Machine Learning, Large Language Models, Semantic Alignment
Abstract: This paper studies the problem of zero-shot text-attributed graph learning, which aims to generate high-quality node representations for unseen text-attributed graphs. Recent approaches usually adopt large language models (LLMs) instead of graph neural networks (GNNs) to extract semantics because of their strong generalization ability, but this can neglect the intrinsic geometric structure of the graph. To this end, we propose a novel approach named $\underline{\text{P}}$rototypical M$\underline{\text{u}}$tual P$\underline{\text{r}}$ompting $\underline{\text{E}}$nhancement (PURE) for zero-shot text-attributed graph learning. The core of PURE is to generate high-quality prompts using prototypical learning, combining the advantages of both language models and graph models. In particular, we first apply dual graph pre-training from both instance and informativeness perspectives to obtain a generalizable GNN. Then, we incorporate the frozen language and graph models into a mutual prompt learning framework. On the one hand, we extract node tokens carrying geometric relationships using the graph model and pass them through multiple prototypical projections to enhance the understanding of the language model. On the other hand, we extract graph information and task descriptions using the language model, which serve as instructions for the graph model. Extensive experiments on both node classification and link prediction validate the effectiveness of PURE against competing baselines.
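To make the mutual prompting direction from graph model to language model concrete, below is a minimal, assumption-laden sketch of how frozen GNN node tokens could be projected onto learnable prototypes to produce soft prompts for a frozen LM. All dimensions, module names (`PrototypicalProjection`, `to_lm`), and the softmax prototype-mixing mechanism are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of prototype-based prompt generation; the real PURE
# architecture is not specified in the abstract, so everything here is a guess.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypicalProjection(nn.Module):
    """Project frozen-GNN node tokens onto learnable prototypes,
    yielding soft prompts in the (frozen) language model's embedding space."""
    def __init__(self, gnn_dim: int, lm_dim: int, num_prototypes: int = 8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, gnn_dim))
        self.to_lm = nn.Linear(gnn_dim, lm_dim)  # map into LM embedding space

    def forward(self, node_tokens: torch.Tensor) -> torch.Tensor:
        # node_tokens: (num_nodes, gnn_dim), produced by a frozen GNN
        sim = F.softmax(node_tokens @ self.prototypes.T, dim=-1)  # (N, K) weights
        mixed = sim @ self.prototypes                             # prototype mixture
        return self.to_lm(mixed)                                  # (N, lm_dim) soft prompts

# Usage: only the projection is trained; both backbones stay frozen.
gnn_dim, lm_dim, num_nodes = 64, 128, 10
with torch.no_grad():
    node_tokens = torch.randn(num_nodes, gnn_dim)  # stand-in for GNN output
proj = PrototypicalProjection(gnn_dim, lm_dim)
soft_prompts = proj(node_tokens)
print(soft_prompts.shape)  # torch.Size([10, 128])
```

In a full pipeline these soft prompts would be prepended to the LM's input embeddings, while an LM-derived instruction vector would condition the GNN in the reverse direction; that reverse path is omitted here since the abstract gives no detail on its form.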
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 14723