CBPL: A Unified Calibration and Balancing Propensity Learning Framework in Causal Recommendation for Debiasing

Published: 21 Jun 2025, Last Modified: 19 Aug 2025, IJCAI 2025 Workshop on Causal Learning for Recommendation Systems, CC BY 4.0
Keywords: Debiased recommendation, propensity learning, calibration, balancing
Abstract: In recommender systems, observed data suffer from the Missing-Not-At-Random (MNAR) problem: users rate only a non-random subset of items, so a model trained directly on such data yields biased recommendations. One popular family of debiasing methods learns an accurate propensity score (the probability that a rating is observed) and then reweights the observed samples to obtain unbiased rating predictions. Although calibration and balancing metrics are widely adopted as constraints for learning a high-quality propensity model, existing methods optimize these objectives in isolation, neglecting their inherent connections. To bridge this gap, we first decompose the balancing constraint so that the balancing loss and the calibration loss take a similar form. We then propose a unified Calibration and Balancing Propensity Learning (CBPL) framework that minimizes the calibration loss and the balancing loss simultaneously. In addition, we provide a theoretical analysis showing that our method has a variance reduction property. Experimental results on three real-world recommendation datasets demonstrate that our method outperforms state-of-the-art baselines.
Submission Number: 17
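The ingredients described in the abstract (MNAR observation, inverse-propensity reweighting, a calibration loss on the propensity model, and a balancing loss) can be illustrated with a minimal numerical sketch. Everything below is an illustrative assumption for exposition: the data are simulated, the calibration loss is taken as the negative log-likelihood of the observation indicators, and the balancing loss as a squared moment-matching gap; none of this is the paper's actual CBPL formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MNAR setup (hypothetical): higher ratings are more likely to be observed.
n = 10_000
ratings = rng.integers(1, 6, size=n).astype(float)
true_p = 0.1 + 0.15 * (ratings - 1) / 4        # true observation propensities
observed = rng.random(n) < true_p              # MNAR observation mask


def calibration_loss(p_hat, obs):
    """Negative log-likelihood of the observation indicators under p_hat
    (one common way to measure propensity calibration)."""
    return -np.mean(obs * np.log(p_hat) + (1 - obs) * np.log(1 - p_hat))


def balancing_loss(p_hat, obs, feature):
    """Squared gap between the inverse-propensity-weighted feature mean on
    observed samples and the full-population feature mean (moment balancing)."""
    ips_mean = np.mean(obs * feature / p_hat)
    return (ips_mean - feature.mean()) ** 2


# A naive constant-propensity model versus a unified objective that sums
# both losses, mirroring the joint minimization described in the abstract.
p_hat = np.full(n, observed.mean())
unified = calibration_loss(p_hat, observed) + balancing_loss(p_hat, observed, ratings)

# With the true propensities, the IPS estimate of the mean rating is unbiased.
ips_est = np.mean(observed * ratings / true_p)
naive_est = ratings[observed].mean()           # biased: over-represents high ratings
```

In this sketch `naive_est` overestimates the population mean rating because high ratings are observed more often, while `ips_est` corrects the bias; a propensity model trained to jointly reduce both losses would approach the true propensities.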