Human-Readable Representation for Graph Neural Networks

ACL ARR 2024 June Submission5772 Authors

16 Jun 2024 (modified: 02 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: This research presents a method for representing nodes in graph neural networks (GNNs) as human-readable natural-language text, in place of the traditional numerical embeddings. Employing a large language model (LLM) as a projector, we train GNNs to aggregate information from neighboring nodes and iteratively update node representations. Experiments on the MovieLens dataset, widely used for recommendation tasks, demonstrate that human-readable representations capture information useful for recommendation, suggesting that LLMs can successfully aggregate neighborhood information in a graph. Furthermore, fine-tuning the LLM improves its ability to generate application-specific human-readable representations. This technique not only facilitates the incorporation of world knowledge into GNNs but also enhances their interpretability and allows humans to intervene in their behavior. Our approach shows significant potential for making graph neural networks more understandable and controllable.
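The core loop the abstract describes, aggregating neighbors' text and updating each node's representation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `summarize` stands in for the LLM projector (here a simple deduplicating concatenation), and the graph, node names, and descriptions are invented toy data in the spirit of a MovieLens user-item graph.

```python
def summarize(texts, max_len=300):
    """Stand-in for the LLM projector: merge neighbor descriptions
    into one human-readable string (a real system would prompt an LLM)."""
    merged = "; ".join(sorted(set(texts)))
    return merged[:max_len]

def propagate(graph, descriptions, num_layers=2):
    """Iteratively update each node's text representation from its own
    description plus those of its neighbors (GNN-style message passing)."""
    state = dict(descriptions)
    for _ in range(num_layers):
        new_state = {}
        for node, neighbors in graph.items():
            inputs = [state[node]] + [state[n] for n in neighbors]
            new_state[node] = summarize(inputs)
        state = new_state
    return state

# Toy bipartite user-item graph (hypothetical data).
graph = {
    "user_1": ["movie_a", "movie_b"],
    "movie_a": ["user_1"],
    "movie_b": ["user_1"],
}
descriptions = {
    "user_1": "enjoys sci-fi",
    "movie_a": "space opera",
    "movie_b": "time-travel thriller",
}

reps = propagate(graph, descriptions)
```

After two layers, each node's representation is plain text that mixes in its neighbors' descriptions (e.g., `reps["user_1"]` mentions both movies), which is what makes the representation inspectable and editable by a human.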
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: human readable representation, graph neural network, large language model
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 5772