Tighter Bounds on Bias Estimation in Doubly Robust Estimators

Published: 21 Jun 2025, Last Modified: 19 Aug 2025
Venue: IJCAI 2025 Workshop on Causal Learning for Recommendation Systems
License: CC BY 4.0
Keywords: Doubly Robust estimators, bias overestimation, bias correction
TL;DR: A conservative bias relaxation based on Lagrange's identity reduces bias overestimation of DR estimators.
Abstract: Recommender systems have become ubiquitous in personalized service platforms, yet their performance suffers from selection bias—a systematic distortion arising from non-random missing ratings, since users tend to rate only the items they prefer. While Doubly Robust (DR) estimators have emerged as a dominant solution by concurrently addressing bias and variance, recent studies reveal that conventional bias relaxation techniques rely on excessively coarse approximations, leading to significant overestimation of model bias. This work introduces a novel conservative bias relaxation framework that derives tighter error bounds through a theoretical analysis based on Lagrange's identity, and empirically demonstrates reduced bias overestimation on a semi-synthetic dataset built from ML100K. The effectiveness of the bias correction in practical algorithms is further validated on two real-world datasets.
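As background for the abstract's central tool, the sketch below numerically checks Lagrange's identity, (Σᵢaᵢ²)(Σᵢbᵢ²) − (Σᵢaᵢbᵢ)² = Σ_{i<j}(aᵢbⱼ − aⱼbᵢ)², which the TL;DR credits for the tighter bound. This is illustrative only; it does not reproduce the paper's actual derivation, and the function name is mine:

```python
def lagrange_identity_sides(a, b):
    """Return (LHS, RHS) of Lagrange's identity for equal-length sequences a, b.

    LHS = (sum a_i^2)(sum b_i^2) - (sum a_i b_i)^2
    RHS = sum over pairs i < j of (a_i b_j - a_j b_i)^2
    The identity says these are equal, replacing the coarse Cauchy-Schwarz
    inequality (LHS >= 0) with an exact expression for the gap.
    """
    n = len(a)
    lhs = sum(x * x for x in a) * sum(y * y for y in b) \
        - sum(x * y for x, y in zip(a, b)) ** 2
    rhs = sum(
        (a[i] * b[j] - a[j] * b[i]) ** 2
        for i in range(n)
        for j in range(i + 1, n)
    )
    return lhs, rhs

lhs, rhs = lagrange_identity_sides([1.0, 2.0, 3.0], [4.0, 1.0, -2.0])
# → (294.0, 294.0): both sides agree exactly
```

Because the identity is an equality rather than an inequality, a bound built on it can be made tighter than one obtained by simply dropping the cross-term, which is the intuition behind the reduced overestimation the abstract reports.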
Submission Number: 16