Abstract: Graph neural networks (GNNs) have become a dominant
modeling paradigm for graph-structured data, and the emergence of large language models (LLMs) has spurred growing interest in integrating external semantic knowledge into
GNNs. Existing LLM-based GNNs focus on extracting semantically similar information from LLMs to enhance representation learning. However, they generally overlook key signals that are semantically dissimilar yet exhibit stronger inter-class discriminative ability. In particular, when the original graph data contains noise or semantic ambiguity, a purely similarity-based semantic augmentation strategy not only fails to provide effective enhancement but may also amplify misleading signals produced by the LLM in response to low-quality inputs or its own hallucinations, further degrading the discriminative power and robustness of GNNs. To this end, we propose a dual positive-negative knowledge extraction strategy based on LLMs and integrate it with a knowledge distillation mechanism that dynamically transfers multi-dimensional enhancement signals to GNNs, thereby achieving
fine-grained and robust graph representation learning. Specifically, we design personalized prompts to guide LLMs in generating semantically similar positive signals and semantically
dissimilar negative signals, which help the model capture
intra-class consistency and inter-class distinctions. We then generate structural and semantic rationales as supplementary knowledge that ground and guide the supervision signals. To identify high-confidence transferred
knowledge, we introduce a language-based evaluation mechanism to filter low-confidence or hallucinated outputs. Finally, under a unified distillation framework, our method uses
both positive and negative knowledge to guide GNN training,
achieving adaptive and robust representation learning. Extensive experiments on benchmark datasets demonstrate the superior
performance of our approach across various tasks.
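As a rough illustration (not the paper's exact formulation, which the abstract does not specify), the unified distillation objective can be sketched in PyTorch as follows; the names pn_distill_loss, pos_emb, neg_emb, and conf are hypothetical. The GNN's node embeddings are pulled toward LLM-derived positive signals and pushed away from negative ones, weighted by per-node confidence scores from the language-based evaluator.

    import torch
    import torch.nn.functional as F

    def pn_distill_loss(z, pos_emb, neg_emb, conf, margin=0.5):
        # z:       (N, d) GNN node embeddings
        # pos_emb: (N, d) LLM-derived semantically similar (positive) signals
        # neg_emb: (N, d) LLM-derived semantically dissimilar (negative) signals
        # conf:    (N,)   confidence scores in [0, 1] from the evaluator
        z = F.normalize(z, dim=-1)
        pos_sim = (z * F.normalize(pos_emb, dim=-1)).sum(-1)  # intra-class consistency
        neg_sim = (z * F.normalize(neg_emb, dim=-1)).sum(-1)  # inter-class distinction
        # Margin loss: pull toward positives, push away from negatives;
        # low-confidence (possibly hallucinated) signals contribute less.
        return (conf * F.relu(margin - pos_sim + neg_sim)).mean()

    # Usage: total_loss = task_loss + lambda_kd * pn_distill_loss(z, p_emb, n_emb, conf)

In this sketch, the confidence weights down-weight low-confidence or hallucinated LLM outputs rather than trusting all transferred knowledge equally.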