Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Analysis of attention distribution and windowing reveals phenomena such as the NLP-inspired "attention sink" and a novel "Skewed Line Sink" in graphs.
Abstract: Attention mechanisms are critical to the success of large language models (LLMs), driving significant advancements in multiple fields. However, for graph-structured data, which requires emphasis on topological connections, they fall short compared to message-passing mechanisms over fixed links, such as those employed by Graph Neural Networks (GNNs). This raises a question: "Does attention fail for graphs in natural language settings?" Motivated by these observations, we conduct an empirical study from the perspective of attention mechanisms to explore how LLMs process graph-structured data, with the goal of gaining deeper insight into the attention behavior of LLMs over graph structures. Through a series of experiments, we uncover unique phenomena in how LLMs apply attention to graph-structured data and analyze these findings to improve the modeling of such data by LLMs. The primary findings of our research are: 1) While LLMs can recognize graph data and capture text-node interactions, they struggle to model inter-node relationships within graph structures due to inherent architectural constraints. 2) The attention distribution of LLMs across graph nodes does not align with ideal structural patterns, indicating a failure to adapt to graph topology nuances. 3) Neither fully connected attention (as in LLMs) nor fixed connectivity (as in GNNs) is optimal; each has specific limitations in its application scenarios. Instead, intermediate-state attention windows improve LLM training performance and transition seamlessly to fully connected windows during inference. Source code: anonymous.4open.science/r/LLM_exploration-B21F
Lay Summary: We find that although LLMs gradually become aware of graph data during training, they do not properly utilize the connectivity information within the graph. We analyze and explain this from two main aspects: attention distribution and attention windows. Our findings indicate that graph data exhibits phenomena similar to the "attention sink" observed in the NLP domain, as well as a phenomenon unique to graph data that we call the "Skewed Line Sink"; both interfere with how LLMs allocate attention within the graph. Moreover, neither the fully connected attention window of LLMs nor the fixed connection window used in GNNs is well suited to modeling graphs with LLMs.
Primary Area: Deep Learning->Large Language Models
Keywords: large language model, graph, attention perspective
Submission Number: 2960