Abstract: Fake news detection on social media is crucial to purifying the online environment and protecting public safety. Many existing methods explore news propagation structures through graph neural networks (GNNs) to determine the truthfulness of news. End-to-end supervised GNNs notoriously depend on large amounts of labels. Recently, self-supervised graph pretraining has emerged as a promising solution to alleviate this dependence on labels. However, applying graph pretraining to fake news detection still faces two challenges: 1) The missing and unreliable interactions intrinsic to news propagation structures seriously damage pretraining performance. 2) There is an inherent gap between pretraining and the downstream fake news detection task due to inconsistent optimization objectives, which hinders the efficient transfer of pretrained prior knowledge and causes suboptimal detection results. To address these two challenges, we propose RGCP, a structure-redefined graph pretraining framework with contrastive prompting for fake news detection. Specifically, we design a propagation structure refinement module that adds potential implicit interactions and removes noisy interactions according to connection probabilities between posts, estimated under the guidance of self-supervised contrastive learning. The redefined structures thereby provide reliable news propagation patterns for generating robust pretrained news representations. Moreover, we propose a novel contrastive-learning-based prompt tuning module that reformulates the downstream fake news detection task in a form similar to the graph contrastive pretraining, bridging the optimization objective gap. Extensive experiments on benchmark datasets demonstrate the superiority of RGCP, which achieves an average improvement of 10.15% in few-shot classification.
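The structure refinement idea described above can be illustrated with a minimal sketch: given post embeddings and an observed propagation graph, estimate pairwise connection probabilities and use thresholds to add likely implicit edges and drop unlikely noisy ones. This is a hypothetical simplification for intuition only; the probability estimator, the thresholds, and the function below are illustrative assumptions, whereas RGCP learns connection probabilities under self-supervised contrastive guidance.

```python
import numpy as np

def refine_structure(emb, adj, add_thresh=0.9, remove_thresh=0.1):
    """Sketch of propagation-structure refinement (illustrative only).

    emb: (n, d) post embeddings; adj: (n, n) binary adjacency matrix.
    Returns a redefined adjacency matrix with high-probability implicit
    edges added and low-probability (noisy) edges removed.
    """
    # Map cosine similarity from [-1, 1] to a connection "probability" in [0, 1]
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    prob = (norm @ norm.T + 1.0) / 2.0

    refined = adj.copy()
    refined[prob >= add_thresh] = 1      # add potential implicit interactions
    refined[prob <= remove_thresh] = 0   # remove noisy interactions
    np.fill_diagonal(refined, 0)         # no self-loops
    return refined
```

In the full method, the similarity-based estimate here would be replaced by probabilities produced by a model trained with a contrastive objective over the propagation graph.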
External IDs: dblp:journals/tcss/WangTZSXZYW25