Large Language Model-Guided Disentangled Belief Representation Learning on Polarized Social Graphs

Published: 01 Jan 2024, Last Modified: 06 Feb 2025 · ICCCN 2024 · CC BY-SA 4.0
Abstract: This paper advances belief representation learning in polarized networks: the mapping of social beliefs espoused by users and posts into a disentangled latent space that separates (the members and beliefs of) each side. Our prior work embeds social interaction data into a disentangled latent space using non-negative variational graph auto-encoders. However, interaction graphs alone may not adequately reflect similarity and/or disparity in beliefs, especially when the graphs suffer from sparsity and outliers. In this paper, we investigate the impact of limited guidance from Large Language Models (LLMs) on the accuracy of belief separation. Specifically, we integrate social graphs with LLM-based soft labels in a novel weakly-supervised, interpretable graph representation learning framework. This framework combines the strengths of graph- and text-based information and maintains the interpretability of the learned representations, where different axes of the latent space denote association with different sides of the divide. An evaluation on six real-world Twitter datasets illustrates the effectiveness of the proposed model at stance detection, demonstrating 5.9%-6.5% improvements in accuracy, F1 score, and purity, without introducing significant computational overhead. An ablation study further examines the impact of each component of the proposed architecture.
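To make the weak-supervision idea concrete, the following is a minimal sketch (not the paper's actual formulation; all function and variable names are illustrative assumptions) of an objective that combines a graph auto-encoder reconstruction term with a soft-label term derived from LLM guidance, where non-negative 2-D embeddings let each axis represent one side of the divide:

```python
import numpy as np

def belief_loss(Z, A, P_llm, lam=0.5, eps=1e-9):
    """Illustrative weakly-supervised objective (not the paper's exact loss).

    Z     : (n, 2) non-negative node embeddings; each axis = one side.
    A     : (n, n) binary adjacency matrix of the social interaction graph.
    P_llm : (n, 2) LLM-provided soft stance labels (rows sum to 1).
    lam   : weight of the LLM-guidance term.
    """
    # Inner-product decoder with a sigmoid, as in standard (V)GAEs.
    A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    recon = -np.mean(A * np.log(A_hat + eps)
                     + (1 - A) * np.log(1 - A_hat + eps))
    # Normalize embeddings into per-node stance distributions, then take
    # a soft cross-entropy against the LLM labels (the weak supervision).
    Q = Z / (Z.sum(axis=1, keepdims=True) + eps)
    guide = -np.mean(np.sum(P_llm * np.log(Q + eps), axis=1))
    return recon + lam * guide
```

Setting `lam = 0` recovers a purely graph-based objective, so the LLM term acts as optional, limited guidance rather than full supervision, consistent with the weakly-supervised framing above.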