TaCL-CoMoE: Task-adaptive Contrastive Learning with Cooperative Mixture of Experts for Multi-task Social Media Analysis

ACL ARR 2025 July Submission 133 Authors

24 Jul 2025 (modified: 19 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Social media has become a crucial platform for information dissemination and opinion expression. The massive, continuous stream of user-generated content has given rise to a range of natural language processing tasks, such as sentiment analysis and topic classification. However, mainstream approaches typically model each task in isolation and lack systematic exploration of collaborative modeling across tasks. This neglects the inherent correlations among social media tasks and limits a model's ability to fully exploit the rich, multi-dimensional semantic information embedded in text. To address this challenge, we propose $\textbf{Ta}$sk-adaptive $\textbf{C}$ontrastive $\textbf{L}$earning with $\textbf{Co}$operative $\textbf{M}$ixture $\textbf{o}$f $\textbf{E}$xperts ($\textbf{TaCL-CoMoE}$), a unified framework for multi-task learning on social media. Specifically, we improve the gating mechanism by replacing the traditional softmax routing with sigmoid activation, enabling cooperative selection among multiple experts and mitigating the ``expert monopoly'' phenomenon. In addition, we introduce a task-adaptive contrastive learning strategy that further enhances the model's ability to capture and distinguish semantic structures across tasks. Experimental results on multiple public social media datasets demonstrate that TaCL-CoMoE consistently achieves state-of-the-art (SOTA) performance. The code is available at https://anonymous.4open.science/r/TaCL-CoMoE.
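To make the gating change concrete, below is a minimal PyTorch sketch (not the authors' implementation; see the linked repository for that) of sigmoid-activated routing in place of softmax routing. Because sigmoid scores each expert independently in [0, 1] rather than normalizing scores into a competing distribution, several experts can be active for the same input, which is the cooperative behavior the abstract describes. The module names (`CooperativeGate`, `SigmoidMoE`), the two-layer GELU expert design, and the dense (all-expert) mixing are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class CooperativeGate(nn.Module):
    """Gate that scores each expert independently with a sigmoid.

    A softmax gate would force experts to compete for a shared
    probability mass, which can let one expert monopolize traffic;
    sigmoid activation lets multiple experts fire at once.
    """

    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.proj = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.proj(x))  # (batch, n_experts), each in [0, 1]


class SigmoidMoE(nn.Module):
    """Mixture of experts combined with cooperative sigmoid gating."""

    def __init__(self, d_model: int, n_experts: int, d_hidden: int):
        super().__init__()
        self.gate = CooperativeGate(d_model, n_experts)
        # Illustrative expert form: a small two-layer feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.gate(x)                                   # (B, E)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, D)
        # Weighted sum over experts; unlike softmax routing, the
        # weights need not sum to 1, so experts contribute jointly.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (B, D)
```

For example, `SigmoidMoE(d_model=768, n_experts=4, d_hidden=2048)` applied to a `(batch, 768)` tensor returns a `(batch, 768)` tensor in which every expert whose gate score is high contributes, rather than a single winner.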
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Social media analysis, Multi-task learning, Mixture of experts, Contrastive learning
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 133