C2AL: Cohort-Contrastive Auxiliary Learning For Large-Scale Recommendation Systems

ICLR 2026 Conference Submission21568 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Recommendation system, Computational advertising, Multi-task learning, Factorization machine
TL;DR: We propose C2AL, an auxiliary learning method leveraging under-represented cohort contrasts to improve attention factorization machines, demonstrating global performance improvement through real-world offline and online tests.
Abstract: Training large-scale recommendation models under a single global objective implicitly assumes homogeneity across user populations. However, real-world data are composites of heterogeneous cohorts with distinct conditional distributions. As models increase in scale and complexity and as more data is used for training, they become dominated by central distribution patterns, neglecting head and tail regions. This imbalance limits the model's learning ability and can result in inactive attention weights or dead neurons. In this paper, we reveal how the attention mechanism can play a key role in factorization machines for shared embedding selection, and propose to address this challenge by analyzing the substructures in the dataset and exposing those with strong distributional contrast through auxiliary learning. Unlike previous research, which heuristically applies weighted labels or multi-task heads to mitigate such biases, we leverage partially conflicting auxiliary labels to regularize the shared representation. This approach customizes the learning process of the attention layers to preserve mutual information with minority cohorts while improving global performance. We evaluated C2AL on six SOTA models using massive production datasets, each containing billions of data points. Experiments show that the factorization machine captures fine-grained user–ad interactions under the proposed method, achieving up to a 0.16% reduction in normalized entropy overall and delivering gains exceeding 0.30% on targeted minority cohorts.
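The core idea described in the abstract, regularizing a shared representation with auxiliary losses computed on contrastive minority cohorts, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`c2al_loss`), the choice of binary cross-entropy, the linear heads, and the weighting scheme `lam` are all illustrative assumptions. It only shows the loss structure: a global objective plus per-cohort auxiliary terms whose (possibly conflicting) labels backpropagate into the shared embedding.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(y, p, eps=1e-7):
    """Binary cross-entropy, clipped for numerical stability."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def c2al_loss(shared_repr, y_main, cohort_masks, y_aux, w_main, w_aux, lam=0.1):
    """Hypothetical combined objective: main loss + cohort-auxiliary losses.

    shared_repr  : (n, d) shared representation (e.g. the output of an
                   attention-gated factorization-machine embedding layer)
    y_main       : (n,) global labels
    cohort_masks : list of boolean (n,) masks selecting each minority cohort
    y_aux        : list of (n,) auxiliary labels, which may partially
                   conflict with y_main on the same examples
    w_main       : (d,) main head weights
    w_aux        : list of (d,) per-cohort auxiliary head weights
    lam          : auxiliary weight (an assumed hyperparameter)
    """
    loss = bce(y_main, sigmoid(shared_repr @ w_main))
    for mask, y, w in zip(cohort_masks, y_aux, w_aux):
        if mask.any():
            # Auxiliary term on the cohort's slice of the shared
            # representation; its gradient regularizes the shared layers.
            loss += lam * bce(y[mask], sigmoid(shared_repr[mask] @ w))
    return loss

# Toy usage with random data (shapes only; no real training loop).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                      # shared representation
y = rng.integers(0, 2, 8).astype(float)          # global labels
mask = np.array([True] * 4 + [False] * 4)        # one minority cohort
y_a = rng.integers(0, 2, 8).astype(float)        # auxiliary labels
w, w_a = rng.normal(size=4), rng.normal(size=4)
total = c2al_loss(X, y, [mask], [y_a], w, [w_a], lam=0.5)
```

With `lam=0` the objective reduces to the global loss alone; increasing `lam` trades some global fit for stronger gradient signal from the contrastive cohorts, which is the trade-off the paper's attention-layer regularization targets.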
Primary Area: interpretability and explainable AI
Submission Number: 21568