Abstract: The rapid spread of rumors on social networks can significantly impact social stability and people’s daily lives. Recently, there has been increasing interest in rumor detection methods based on the feedback generated during user interactions and on the propagation structure. However, these methods often face the challenge of limited labeled data. Although graph-based contrastive learning methods alleviate this dependence on labeled data, they struggle to effectively represent different samples of the same class in supervised classification tasks. This paper proposes a novel Supervised Graph Contrastive Regularization (SGCR) approach to address these challenges. SGCR leverages label information for supervised contrastive learning and applies a simple regularization to the embeddings that considers the variance of each dimension separately. To prevent the collapse problem, sessions of the same class are pulled together in the embedding space, while sessions of different classes are pushed apart. Experimental results on two real-world datasets demonstrate that SGCR outperforms baseline methods.
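To make the two ingredients described above concrete, the following is a minimal sketch (not the authors' implementation) of a loss that combines a supervised contrastive term with a per-dimension variance regularizer. It assumes PyTorch, session embeddings `z` of shape (batch, dim), integer class labels `y`, and a hypothetical combination weight `lam`; the paper's exact loss formulation may differ.

```python
# Illustrative sketch only: supervised contrastive loss + per-dimension
# variance regularization for session embeddings, under the assumptions above.
import torch
import torch.nn.functional as F

def supcon_loss(z, y, temperature=0.5):
    """Supervised contrastive term: sessions of the same class are pulled
    together, sessions of different classes are pushed apart."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                        # pairwise similarities
    mask_self = torch.eye(len(y), dtype=torch.bool, device=z.device)
    logits = sim.masked_fill(mask_self, float('-inf')) # drop self-pairs from denominator
    log_prob = sim - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~mask_self
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # average log-probability over same-class (positive) pairs
    return -(log_prob * pos_mask).sum(dim=1).div(pos_count).mean()

def variance_regularizer(z, target_std=1.0, eps=1e-4):
    """Hinge on the standard deviation of each embedding dimension,
    discouraging any dimension from collapsing to a constant."""
    std = torch.sqrt(z.var(dim=0) + eps)
    return torch.relu(target_std - std).mean()

def sgcr_objective(z, y, lam=1.0):
    # `lam` is a hypothetical weighting hyperparameter for illustration
    return supcon_loss(z, y) + lam * variance_regularizer(z)

# Usage example with random embeddings and binary (rumor / non-rumor) labels.
z = torch.randn(8, 64)
y = torch.randint(0, 2, (8,))
loss = sgcr_objective(z, y)
```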