CGraphNet: Contrastive Graph Context Prediction for Sparse Unlabeled Short Text Representation Learning on Social Media
Abstract: Unlabeled text representation learning (UTRL), which encompasses static word embeddings such as Word2Vec and contextualized word embeddings such as Bidirectional Encoder Representations from Transformers (BERT), aims to capture semantic relationships between words in a low-dimensional space without manual labeling. These embeddings are invaluable for downstream tasks such as document classification and clustering. However, the surge of short texts generated daily on social media platforms yields sparse word co-occurrences, which degrades UTRL quality. Contextualized models such as recurrent neural networks (RNNs) and BERT, while powerful, often struggle to predict the next word because word sequences in short texts are sparse. To address this, we introduce CGraphNet, a contrastive graph context prediction model for UTRL. CGraphNet converts short texts into graphs, linking words that occur sequentially. Information from the next word and its neighbors then informs the target prediction, a process we call graph context prediction, which mitigates the sparse word co-occurrence problem in brief sentences. To reduce noise, an attention mechanism weights the importance of each neighbor, while a contrastive objective encourages more distinctive representations by contrasting the target word with its neighbors. Experiments on real-world datasets show that CGraphNet outperforms competing baselines on classification and clustering tasks.
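To make the abstract's pipeline concrete, below is a minimal sketch of the three ideas it describes: building a word graph from sequential word pairs, attention-weighted aggregation of a target word's neighbors (graph context prediction), and an InfoNCE-style contrastive objective. This is not the paper's actual implementation; the names `build_graph`, `GraphContextModel`, and `contrastive_loss`, the bidirectional edges, and the specific loss form are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_graph(sentences):
    # Assumption: link each word to the word that follows it in any
    # sentence, and keep edges undirected so neighborhoods stay dense.
    vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}
    neighbors = {i: set() for i in vocab.values()}
    for s in sentences:
        for a, b in zip(s, s[1:]):
            neighbors[vocab[a]].add(vocab[b])
            neighbors[vocab[b]].add(vocab[a])
    return vocab, {k: sorted(v) for k, v in neighbors.items()}

class GraphContextModel(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.att = nn.Linear(2 * dim, 1)  # scores each (target, neighbor) pair

    def context(self, target, neigh):
        # Attention-weighted aggregation of the target's graph neighbors.
        t = self.emb(target)                      # (dim,)
        n = self.emb(neigh)                       # (k, dim)
        scores = self.att(torch.cat([t.expand_as(n), n], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=0)          # neighbor importance
        return (alpha.unsqueeze(-1) * n).sum(0)   # (dim,)

def contrastive_loss(model, target, neigh, negatives, tau=0.5):
    # Assumed InfoNCE-style objective: pull the target toward its attended
    # graph context, push it away from randomly sampled negative words.
    c = model.context(target, neigh)
    t = model.emb(target)
    pos = F.cosine_similarity(t, c, dim=0) / tau
    neg = F.cosine_similarity(t.unsqueeze(0), model.emb(negatives), dim=-1) / tau
    return -pos + torch.logsumexp(torch.cat([pos.view(1), neg]), dim=0)

if __name__ == "__main__":
    # Toy usage with two short "posts"; vocabulary and negatives are made up.
    sents = [["graph", "context", "prediction"], ["short", "text", "graph"]]
    vocab, nbrs = build_graph(sents)
    model = GraphContextModel(len(vocab))
    tgt = torch.tensor(vocab["graph"])
    loss = contrastive_loss(model, tgt,
                            torch.tensor(nbrs[vocab["graph"]]),
                            torch.tensor([vocab["short"]]))
    loss.backward()
    print(float(loss))
```

Aggregating over all of a word's graph neighbors, rather than predicting a single next word, is what lets such a model sidestep the sparse-sequence problem the abstract highlights.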
External IDs: dblp:journals/tcss/ChenGLWXGZL25