Keywords: information retrieval, negative sampling, in-batch sampling, out-batch sampling, two-tower model, popularity bias, large language model (LLM)
Abstract: The two-tower model has been widely used for large-scale recommendation systems, particularly in the retrieval stage. Industry standards for training two-tower models typically involve in-batch and/or out-of-batch negative sampling. However, these methods often produce easy negatives that the model learns quickly and that therefore fail to challenge it sufficiently. To address this issue, we propose a novel self-supervised hard negative sampling technique that leverages a large language model (LLM) to generate hard negatives from the same cluster during model training. By utilizing the LLM to learn media representations, our approach ensures that the generated negatives are more challenging and informative. This real-time sampling framework is designed for seamless integration into production models, capable of handling billions of training data points with minimal computational complexity. Experiments on public datasets, along with deployment to a large online user base, demonstrate that our negative sampling technique outperforms widely used industry methods. Furthermore, our analysis in industrial applications reveals that this sampling method can help break inherent feedback loops in recommendations and significantly reduce popularity bias.
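To make the contrast concrete, the sketch below illustrates the general idea of same-cluster hard-negative sampling versus standard in-batch negatives. It is a minimal illustration, not the paper's implementation: the use of scikit-learn KMeans over precomputed LLM item embeddings, the cluster count, and the helper names (sample_hard_negatives, sample_in_batch_negatives) are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): cluster LLM-derived item
# embeddings offline, then draw hard negatives from the positive item's cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder for item/media representations produced by an LLM encoder.
num_items, dim = 1000, 64
item_embeddings = rng.normal(size=(num_items, dim)).astype(np.float32)

# Offline step: items in the same cluster are semantically close,
# so they serve as harder negatives than random or in-batch items.
num_clusters = 32
cluster_ids = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(item_embeddings)
cluster_to_items = {c: np.flatnonzero(cluster_ids == c) for c in range(num_clusters)}

def sample_hard_negatives(positive_item: int, k: int = 4) -> np.ndarray:
    """Sample k negatives from the same cluster as the positive item."""
    candidates = cluster_to_items[cluster_ids[positive_item]]
    candidates = candidates[candidates != positive_item]
    if len(candidates) < k:  # fall back to uniform sampling for tiny clusters
        return rng.choice(num_items, size=k, replace=False)
    return rng.choice(candidates, size=k, replace=False)

def sample_in_batch_negatives(batch_items: np.ndarray) -> np.ndarray:
    """Baseline: every other positive in the batch serves as a negative."""
    return np.array([np.delete(batch_items, i) for i in range(len(batch_items))])

# Example training batch: positives plus their same-cluster hard negatives.
batch_items = rng.choice(num_items, size=8, replace=False)
hard_negs = np.stack([sample_hard_negatives(i) for i in batch_items])
print("in-batch negatives shape:", sample_in_batch_negatives(batch_items).shape)  # (8, 7)
print("hard negatives shape:", hard_negs.shape)                                   # (8, 4)
```

In a production setting the paper describes this as a real-time component of training; the offline clustering shown here is only one plausible way to realize the same-cluster idea.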
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 3807