User Response Modeling in Reinforcement Learning for Ads Allocation

Published: 01 Jan 2024 · Last Modified: 11 Mar 2025 · WWW (Companion Volume) 2024 · CC BY-SA 4.0
Abstract: User response modeling can enhance the learning of user representations and thereby improve reinforcement learning (RL) recommender agents. However, because user behavior is shaped both by long-term preferences and by short-term stochastic factors (e.g., weather, mood, or fashion trends), it remains difficult to capture with the recurrent neural network-based user response models of prior work. Moreover, since user interests shift over time, it is often unrealistic to assume that user dynamics are stationary. Drawing inspiration from opponent modeling, we propose a novel network structure, the Deep User Q-Network (DUQN), which incorporates a probabilistic user response model into the Q-learning ads allocation strategy to capture the effect of the non-stationary user policy on Q-values. We build the user response model on the Recurrent State-Space Model (RSSM), whose deterministic and stochastic components allow us to account for long-term preferences and short-term stochastic factors, respectively. In particular, we design a RetNet version of RSSM (R-RSSM) to support parallel computation. The R-RSSM model can further produce multi-step predictions, enabling bootstrapping over multiple steps simultaneously. Finally, we conduct extensive experiments on a large-scale offline dataset from the Meituan food delivery platform and on a public benchmark. Experimental results show that our method outperforms state-of-the-art (SOTA) baselines. Moreover, our model achieves a significant improvement in an online A/B test and has been fully deployed on the industrial Meituan platform, serving more than 500 million customers.
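To make the two core ideas in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: an RSSM-style user response model combining a deterministic recurrent path (long-term preferences) with a stochastic latent (short-term factors), and a DUQN-style head whose Q-values are conditioned on that model's state. All module names, layer sizes, and the Gaussian latent parameterization are assumptions for illustration; the paper's RetNet-based R-RSSM variant and multi-step bootstrapping are omitted.

```python
import torch
import torch.nn as nn


class UserResponseRSSM(nn.Module):
    """Sketch of an RSSM-style user model: deterministic GRU state + stochastic latent."""

    def __init__(self, obs_dim: int, det_dim: int = 64, stoch_dim: int = 16):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim + stoch_dim, det_dim)   # deterministic path: long-term preferences
        self.prior_head = nn.Linear(det_dim, 2 * stoch_dim)   # stochastic path: short-term factors
        self.det_dim, self.stoch_dim = det_dim, stoch_dim

    def step(self, obs, h, z):
        # Advance the deterministic state, then sample a new stochastic latent
        # via the reparameterization trick (a common RSSM choice, assumed here).
        h = self.rnn(torch.cat([obs, z], dim=-1), h)
        mean, log_std = self.prior_head(h).chunk(2, dim=-1)
        z = mean + log_std.exp() * torch.randn_like(mean)
        return h, z


class DUQN(nn.Module):
    """Sketch of a Q-network conditioned on the inferred user response state."""

    def __init__(self, state_dim: int, det_dim: int, stoch_dim: int, n_actions: int):
        super().__init__()
        self.q_head = nn.Sequential(
            nn.Linear(state_dim + det_dim + stoch_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state, h, z):
        # Q-values depend on both the ads-allocation state and the
        # (non-stationary) user response representation (h, z).
        return self.q_head(torch.cat([state, h, z], dim=-1))


# Toy usage: roll the user model one step, then score candidate allocations.
obs_dim, state_dim, n_actions = 8, 10, 4
user_model = UserResponseRSSM(obs_dim)
duqn = DUQN(state_dim, det_dim=64, stoch_dim=16, n_actions=n_actions)
obs = torch.randn(1, obs_dim)
h = torch.zeros(1, user_model.det_dim)
z = torch.zeros(1, user_model.stoch_dim)
h, z = user_model.step(obs, h, z)
q_values = duqn(torch.randn(1, state_dim), h, z)  # shape: (1, n_actions)
```

Conditioning the Q-head on both the deterministic and stochastic latents is what lets the value estimates react to a shifting user policy rather than assuming stationary dynamics.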