Keywords: Graph learning; LLM-enhanced; Rumor detection; Virtual node.
TL;DR: This paper proposes a novel framework for rumor detection on social media by integrating Large Language Models with Graph Neural Networks.
Abstract: The proliferation of rumors on social networks poses significant challenges to information credibility and public trust. The dissemination of rumors forms complex networks, yet existing rumor detection methods exhibit several limitations, including a limited capacity to capture complex propagation features. Representing each node solely through its textual embeddings neglects the textual coherence across the entire rumor propagation path, which undermines the accuracy of rumor identification on social platforms. To address these challenges, this study proposes a novel framework for rumor detection on social media, which captures latent characteristics of rumor propagation and enhances contextual correlation within rumor graphs through large language models (LLMs). We introduce a novel paradigm for effectively leveraging LLMs, utilizing their powerful linguistic capabilities to analyze complete information flows within sub-chains, assign rumor probabilities, and guide the construction of connections between a virtual node and selected sub-chain nodes. This modifies the original graph structure, a critical step for capturing subtle rumor signals. Given the inherent limitations of LLMs in rumor identification, we develop a structured prompt framework to mitigate model biases and ensure robust graph learning performance. Additionally, the proposed framework is model-agnostic: it is not constrained to any specific graph learning algorithm or LLM. Its plug-and-play nature allows for seamless integration with further fine-tuned LLMs and graph techniques in the future, potentially enhancing predictive performance without modifying the original algorithms.
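The virtual-node mechanism the abstract describes might be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's implementation: `score_subchain` is a placeholder for the structured-prompt LLM call that assigns a rumor probability to a propagation sub-chain, and the graph is a plain edge list rather than any particular GNN library's format.

```python
# Hypothetical sketch: an LLM scores each propagation sub-chain; a new
# virtual node is then wired to every node on sub-chains whose rumor
# probability exceeds a threshold, modifying the original graph
# structure before it is passed to a graph learner.

def score_subchain(texts):
    # Placeholder heuristic standing in for an actual LLM prompt:
    # flag chains whose text contains a trigger word.
    return 0.9 if any("breaking" in t.lower() for t in texts) else 0.1

def add_virtual_node(edges, subchains, node_texts, threshold=0.5):
    """Return (virtual_node_id, augmented_edge_list), linking the
    virtual node to all nodes on likely rumor-bearing sub-chains."""
    virtual = max(n for e in edges for n in e) + 1  # fresh node id
    linked = set()
    for chain in subchains:  # each chain is a list of node ids
        prob = score_subchain([node_texts[n] for n in chain])
        if prob >= threshold:
            linked.update(chain)
    return virtual, list(edges) + [(virtual, n) for n in sorted(linked)]

# Toy propagation tree: 0 -> 1 -> 2 and 0 -> 3.
edges = [(0, 1), (1, 2), (0, 3)]
texts = {0: "Breaking: dam has burst!", 1: "Is this real?",
         2: "Sharing just in case", 3: "Official denial issued"}
vid, augmented = add_virtual_node(edges, [[0, 1, 2], [0, 3]], texts)
print(vid, augmented)
```

Because the framework is described as plug-and-play, the augmented edge list could feed any downstream graph learning model unchanged; only the scoring function would be replaced by the actual LLM prompt.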
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 24134