Graph Intelligence with Large Language Models and Prompt Learning

Published: 01 Jan 2024 · Last Modified: 13 May 2025 · KDD 2024 · CC BY-SA 4.0
Abstract: Graphs play a significant role in representing and analyzing complex relationships in real-world applications such as citation networks, social networks, and biological data. Graph intelligence is rapidly becoming a crucial aspect of understanding and exploiting the intricate interconnections within graph data. Recently, large language models (LLMs) and prompt learning techniques have pushed graph intelligence forward, outperforming traditional Graph Neural Network (GNN) pre-training methods and setting new performance benchmarks. In this tutorial, we first offer a comprehensive review and analysis of existing methods that integrate LLMs with graphs, organized by a novel taxonomy that classifies them into three categories according to the role of the LLM in graph tasks: enhancer, predictor, or alignment component. Second, we introduce a new learning paradigm that applies prompting to graphs, offering substantial potential to improve graph transferability across diverse tasks and domains. We discuss existing work on graph prompting within a unified framework and present the tool we developed for executing a variety of graph prompting tasks. Finally, we discuss applications that combine graphs, LLMs, and prompt learning across tasks such as urban computing, recommendation systems, and anomaly detection. This lecture-style tutorial extends our original work published at IJCAI 2024 [44] and on arXiv [77], presented at the invitation of KDD 2024.