Multitask-Based Self-Supervised Learning for Recommendation in Social Systems

Published: 2025 · Last Modified: 15 Jan 2026 · IEEE Trans. Comput. Soc. Syst. 2025 · CC BY-SA 4.0
Abstract: In computational social systems, recommendation functionality plays a pivotal role in influencing user behavior, enhancing user experience, and driving engagement. To help recommendation functionality better suggest relevant content or items, social platforms typically apply large-scale knowledge discovery techniques to analyze trends in user interactions and extract patterns from large datasets. Click-through rate (CTR) prediction is crucial in recommendation systems for measuring effectiveness, understanding user behavior, training and optimizing models, impacting business outcomes, enhancing personalization, and identifying issues; it provides actionable insights that help continuously refine and improve the recommendation process. Traditional deep learning-based CTR prediction models do not work well for recommendation in social systems because of data sparsity and long-tail data problems: the representation learned from user behavior is dominated by the major part of the data. In this article, we propose a multitask-based self-supervised learning model (MTSSL) that better handles sparse and long-tail user interaction data. Specifically, we first transform the CTR prediction task into a multitask joint learning framework with a set of shared subnetworks. Each subnetwork learns a representation of the entire user data, so sparse and long-tail data have the opportunity to fall into the best-matched representation space of historical user behavior. Moreover, two kinds of self-supervision signals are employed to guide the learning of these representations. Extensive experiments on four user interaction datasets demonstrate the superiority of the proposed MTSSL over state-of-the-art recommendation models. In an online A/B test, our model achieves around 3% better performance than its counterparts.
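To make the architectural idea concrete, the following is a minimal NumPy sketch of the pattern the abstract describes: several shared subnetworks each produce a representation of the same user features, a softmax gate routes each user toward its best-matched representation space, and a self-supervised signal regularizes the learned representations. All dimensions, the two-layer subnetwork shape, the gating mechanism, and the dropout-view consistency loss are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, K = 16, 8, 4  # feature dim, hidden dim, number of shared subnetworks (assumed sizes)

# One tiny two-layer subnetwork per branch; all branches see the entire user data.
W1 = rng.normal(scale=0.1, size=(K, D, H))
W2 = rng.normal(scale=0.1, size=(K, H, H))
gate_W = rng.normal(scale=0.1, size=(D, K))     # gate picks the best-matched subnetwork
item_emb = rng.normal(scale=0.1, size=(H,))      # hypothetical item embedding

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def represent(x):
    """Route x through all K subnetworks and mix them by a softmax gate, so a
    sparse / long-tail user can fall into whichever representation space fits best."""
    reps = np.stack([np.tanh(np.tanh(x @ W1[k]) @ W2[k]) for k in range(K)])  # (K, H)
    gate = softmax(x @ gate_W)                                               # (K,)
    return gate @ reps                                                        # (H,)

def ctr(x):
    """Predicted click-through probability for one (user, item) pair."""
    return sigmoid(represent(x) @ item_emb)

def ssl_loss(x, drop=0.5):
    """One plausible self-supervision signal (an assumption here): two
    dropout-masked views of the same user should yield nearby representations."""
    m1 = (rng.random(D) > drop) / (1.0 - drop)
    m2 = (rng.random(D) > drop) / (1.0 - drop)
    z1, z2 = represent(x * m1), represent(x * m2)
    return float(np.sum((z1 - z2) ** 2))

# Joint multitask objective: supervised CTR loss plus the SSL regularizer.
x = rng.normal(size=(D,))
p = ctr(x)                               # predicted CTR, in (0, 1)
total = -np.log(p) + 0.1 * ssl_loss(x)   # 0.1 is an assumed trade-off weight
```

In a real training loop each component would be optimized jointly by backpropagation; the sketch only shows the forward pass and how the two loss terms combine into one objective.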