Cache-Enhanced InBatch Sampling with Difficulty-Based Replacement Strategies for Learning Recommenders

Published: 01 Jan 2023 · Last Modified: 02 Aug 2024 · DASFAA (Workshops) 2023 · CC BY-SA 4.0
Abstract: Negative sampling techniques are prevalent in learning recommenders because they avoid computing the loss over the entire corpus, but existing methods still incur significant overhead from re-encoding out-of-batch items. In-batch sampling, which treats the other items in a mini-batch as negatives, is a more practical strategy, although it suffers from exposure bias. Several works attempt to alleviate this bias with a cache mechanism that supplements additional items for a better approximation, but none of them sufficiently evaluate how informative different items are, nor further exploit that information. In this paper, we propose Cache-Enhanced In-Batch Sampling with a Difficulty-Based Replacement Strategy (DBRS) for learning recommenders, which heuristically and adaptively updates the cache according to a designed training difficulty of negative samples. Specifically, the cache is updated using the average and standard deviation of each item's training difficulty, corresponding to estimated first- and second-order moments, so that items with a high average difficulty and high uncertainty have a higher probability of being retained. Historically informative items are thus explored and exploited more effectively, leading to better and faster convergence. DBRS is evaluated on four real-world datasets and outperforms existing state-of-the-art approaches.
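To make the replacement rule concrete, below is a minimal Python sketch of the moment-based cache update described in the abstract. It is an illustrative reading, not the authors' implementation: the class name `DifficultyCache`, the exponential-moving-average decay `beta`, the use of a per-item loss as the "difficulty" signal, and the `mean + std` retention score are all assumptions made here for clarity.

```python
import math
import random


class DifficultyCache:
    """Illustrative cache that retains negative items whose training
    difficulty has a high running average and a high uncertainty (std),
    approximating the first- and second-order moments from the abstract."""

    def __init__(self, capacity: int, beta: float = 0.9):
        self.capacity = capacity
        self.beta = beta   # decay for the exponential moving moments (assumed)
        self.m1 = {}       # item_id -> running first moment of difficulty
        self.m2 = {}       # item_id -> running second moment of difficulty

    def observe(self, item_id: int, difficulty: float) -> None:
        """Update the moving first/second moments for one observed item."""
        m1 = self.m1.get(item_id, difficulty)
        m2 = self.m2.get(item_id, difficulty ** 2)
        self.m1[item_id] = self.beta * m1 + (1 - self.beta) * difficulty
        self.m2[item_id] = self.beta * m2 + (1 - self.beta) * difficulty ** 2

    def score(self, item_id: int) -> float:
        """Mean plus std: favor items that are hard on average or uncertain."""
        mean = self.m1[item_id]
        var = max(self.m2[item_id] - mean ** 2, 0.0)
        return mean + math.sqrt(var)

    def contents(self) -> list:
        """Keep only the top-`capacity` items ranked by the difficulty score."""
        ranked = sorted(self.m1, key=self.score, reverse=True)
        return ranked[: self.capacity]


if __name__ == "__main__":
    # Toy usage: simulate training steps with random per-item losses as the
    # difficulty signal, then read back the supplementary negatives.
    cache = DifficultyCache(capacity=4)
    for _ in range(100):
        for item_id in random.sample(range(10), 5):
            cache.observe(item_id, random.random())
    print(cache.contents())  # ids of the hardest / most uncertain items
```

Under these assumptions, items that are consistently hard (high mean) or whose difficulty fluctuates (high std) survive replacement longest, which matches the abstract's claim that high-average, high-uncertainty items are preferentially restored to the cache.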