Unleashing the Potential of Text-attributed Graphs: Automatic Relation Decomposition via Large Language Models

25 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: large language models (LLM), pretrained language models (PLM), text-attributed graphs (TAG), graph neural networks (GNN)
Abstract: Text-attributed graphs (TAGs) integrate textual information with graph structures, offering unique opportunities for leveraging language models to enhance node feature quality. However, our extensive analysis reveals that downstream task performance on TAGs is hindered by the graph structure itself, which collapses diverse semantics (e.g., “advised by”, “participates in”) into a single relation type (e.g., hyperlinks). By decomposing conventional edges into distinct semantic relations, we observe significant improvements in GNNs’ downstream task performance. Motivated by this, we present **RoSE** (Relation-oriented Semantic Edge-decomposition), a novel framework that leverages large language models (LLMs) to automatically decompose graph structures into different semantic relations without requiring expensive human labelling or domain expertise. **RoSE** consists of two stages: (1) identifying semantic relations via an LLM-based generator and discriminator, and (2) decomposing each edge into its corresponding relations by analyzing the raw textual contents of the connected nodes via an LLM-based decomposer. The decomposed edges produced by our framework can be applied in a model-agnostic, plug-and-play manner, enhancing its versatility. Moreover, **RoSE** achieves state-of-the-art node classification results across various benchmarks and GNN architectures.
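The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: `query_llm` is a hypothetical stand-in for any LLM client, and the prompts, relation-matching logic, and function names are illustrative.

```python
# Minimal sketch of the two-stage RoSE pipeline from the abstract.
# NOTE: `query_llm` is a hypothetical placeholder for an LLM call
# (e.g., an API client); prompts and matching heuristics are assumptions.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

# Stage 1: an LLM-based generator proposes candidate semantic relations,
# and an LLM-based discriminator filters out unsuitable ones.
def identify_relations(dataset_description: str, num_candidates: int = 10) -> list[str]:
    candidates = query_llm(
        f"Given this graph dataset: {dataset_description}\n"
        f"Propose {num_candidates} distinct semantic relations its edges may encode, "
        "one per line."
    ).splitlines()
    relations = []
    for rel in candidates:
        verdict = query_llm(
            f"Dataset: {dataset_description}\n"
            f"Is '{rel}' a meaningful, discriminative edge relation here? "
            "Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            relations.append(rel)
    return relations

# Stage 2: an LLM-based decomposer assigns each edge to relation(s)
# by reading the raw texts attached to its two endpoint nodes.
def decompose_edges(edges, node_texts, relations):
    edges_by_relation = {rel: [] for rel in relations}
    for u, v in edges:
        answer = query_llm(
            f"Node A: {node_texts[u]}\nNode B: {node_texts[v]}\n"
            f"Which of these relations hold between A and B? Options: {relations}"
        )
        for rel in relations:
            if rel.lower() in answer.lower():  # naive string match, for illustration
                edges_by_relation[rel].append((u, v))
    return edges_by_relation
```

Consistent with the plug-and-play claim, the relation-specific edge sets returned by `decompose_edges` could then serve as separate adjacency structures for any multi-relational GNN (e.g., an R-GCN-style model), without modifying the backbone architecture.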
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4873