GraphText: Graph Reasoning in Text Space

Published: 10 Oct 2024 · Last Modified: 19 Nov 2024 · AFM 2024 Poster · CC BY 4.0
Keywords: Node Classification; Large Language Models
TL;DR: We propose GraphText, a language that converts graph tasks into text reasoning problems using a graph-syntax tree. GraphText with ChatGPT surpasses supervised baselines in a training-free manner.
Abstract: Large Language Models (LLMs) have brought transformative changes across various domains by excelling in generative natural language processing. Despite their success, LLMs have not made significant advances in graph machine learning. This limitation arises because graphs encapsulate distinct non-Euclidean structures that are challenging to transform into the natural language LLMs understand. In this paper, we bridge this gap with a novel framework, GraphText, that translates graphs into natural language. GraphText constructs a graph-syntax tree for each graph, capturing both node features and the complex relationships between nodes. By traversing these trees, it produces a text sequence with structural semantics, enabling LLMs to approach graph reasoning as a text generation problem. Notably, GraphText offers several key benefits. It introduces {\em training-free graph reasoning}: even without training on graph data, GraphText with ChatGPT can perform on par with, or even surpass, supervised-trained graph neural networks through in-context learning. Furthermore, GraphText paves the way for {\em interactive graph reasoning}, allowing both humans and LLMs to communicate with the model seamlessly using natural language. These capabilities underscore the vast, yet-to-be-explored potential of LLMs in graph machine learning.
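To make the graph-syntax-tree idea concrete, here is a minimal Python sketch of one plausible reading of the pipeline: build a tree rooted at a target node that captures its features and k-hop neighborhood, then traverse it into an indented text sequence for an LLM prompt. All names here (SyntaxNode, build_tree, tree_to_text) are hypothetical illustrations, not the authors' released code or their exact tree grammar.

```python
# Hypothetical sketch of a graph-syntax tree: node features and
# neighborhood structure are gathered into a tree, then serialized
# depth-first into text that preserves the structural semantics.

from dataclasses import dataclass, field


@dataclass
class SyntaxNode:
    label: str                                   # e.g. "node 0", "feature: ...", "neighbors"
    children: list["SyntaxNode"] = field(default_factory=list)


def build_tree(graph: dict[int, list[int]],
               features: dict[int, str],
               root: int,
               hops: int = 2) -> SyntaxNode:
    """Capture the root's features and its k-hop neighborhood as a tree."""
    node = SyntaxNode(f"node {root}")
    node.children.append(SyntaxNode(f"feature: {features[root]}"))
    if hops > 0:
        hop = SyntaxNode("neighbors")
        for nb in graph[root]:
            hop.children.append(build_tree(graph, features, nb, hops - 1))
        node.children.append(hop)
    return node


def tree_to_text(node: SyntaxNode, depth: int = 0) -> str:
    """Depth-first traversal producing an indented, structure-preserving prompt."""
    lines = ["  " * depth + node.label]
    for child in node.children:
        lines.append(tree_to_text(child, depth + 1))
    return "\n".join(lines)


if __name__ == "__main__":
    g = {0: [1, 2], 1: [0], 2: [0]}
    feats = {0: "title: GNN survey", 1: "title: attention", 2: "title: graphs"}
    prompt = tree_to_text(build_tree(g, feats, root=0, hops=1))
    # The resulting text, plus a task description (e.g. "classify node 0"),
    # can be sent to an LLM for training-free, in-context graph reasoning.
    print(prompt)
```

Bounding the recursion by the hop count keeps the serialized tree finite even on cyclic graphs, which is one simple way such a traversal can avoid revisiting nodes indefinitely.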
Submission Number: 111