Keywords: Text-attributed Graphs
Abstract: Many graphs can be represented as Text-attributed Graphs (TAGs). Because each node in a TAG carries rich textual information, traditional graph neural networks (GNNs) often struggle to deliver satisfactory performance. Recent methods that leverage large language models (LLMs) to generate augmented text features for nodes have notably enhanced node representations, yielding significant performance improvements. However, these methods typically require extensive annotation or fine-tuning over all nodes, which is both time-consuming and expensive. To address this challenge, we propose GAGA, a novel and lightweight framework for TAG representation learning. GAGA adopts a more efficient strategy: it annotates only representative nodes and edges, reducing both annotation time and cost. It further capitalizes on these annotations by constructing an annotation graph that captures the topological relationships among the annotated nodes and edges. Additionally, GAGA introduces a two-level alignment module to integrate the annotation graph with the TAG, ensuring effective alignment of their underlying structures. Experiments demonstrate that GAGA achieves classification accuracy comparable to or exceeding state-of-the-art methods while requiring only 1% of the data to be annotated, making it highly efficient.
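To make the annotation-budget idea concrete, below is a minimal, hypothetical sketch of one way to pick ~1% "representative" nodes and link them into a small annotation graph. The abstract does not specify how representatives are chosen or how the annotation graph is built, so the clustering heuristic, kNN construction, and all function names here are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch (not the paper's algorithm): choose a small budget of
# representative nodes by clustering node text embeddings, then connect the
# chosen nodes by embedding similarity to form an "annotation graph".
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph


def select_representatives(embeddings: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` nodes, one per k-means cluster, nearest to each centroid."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    reps = []
    for c, center in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        # The cluster member closest to its centroid serves as the representative.
        dists = np.linalg.norm(embeddings[members] - center, axis=1)
        reps.append(members[dists.argmin()])
    return np.asarray(reps)


def build_annotation_graph(embeddings: np.ndarray, reps: np.ndarray, k: int = 5):
    """Connect representatives via kNN in embedding space (sparse adjacency)."""
    return kneighbors_graph(
        embeddings[reps],
        n_neighbors=min(k, len(reps) - 1),
        mode="connectivity",
        include_self=False,
    )


# Toy usage: 1,000 nodes with 64-d text embeddings and a 1% annotation budget.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
reps = select_representatives(X, budget=10)   # the ~1% of nodes to annotate
A = build_annotation_graph(X, reps)           # annotation graph over those nodes
print(reps.shape, A.shape)                    # (10,) (10, 10)
```

Only the selected nodes would then need LLM annotation, which is where the claimed time and cost savings over annotating or fine-tuning on all nodes would come from.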
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3159