The Adaptive Doubly Robust Estimator and a Paradox Concerning Logging Policy

21 May 2021, 20:45 (edited 25 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: Doubly robust estimator, Double/debiased machine learning, Causal inference, Semiparametric efficiency, Dependent samples, Adaptive experiments
  • Abstract: The doubly robust (DR) estimator, which involves two nuisance parameters, the conditional mean outcome and the logging policy (the probability of choosing an action), is crucial in causal inference. This paper proposes a DR estimator for dependent samples obtained from adaptive experiments. To obtain an asymptotically normal semiparametric estimator from dependent samples without non-Donsker nuisance estimators, we propose adaptive-fitting as a variant of sample-splitting. We also report an empirical paradox: our proposed DR estimator tends to perform better than other estimators that use the true logging policy. While a similar phenomenon is known for estimators with i.i.d. samples, traditional explanations based on asymptotic efficiency cannot account for our case with dependent samples. We investigate this paradox through simulation studies.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code:
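The abstract describes a DR estimator built from two nuisance parameters: an outcome-regression model and the logging policy. The sketch below illustrates the standard doubly robust (AIPW-style) form of such an estimator on synthetic logged bandit data; it is a minimal illustration of the general construction, not the paper's adaptive-fitting procedure, and all variable names and data-generating choices are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged bandit data (illustrative only, not from the paper).
n = 10_000
x = rng.normal(size=n)                      # context
logging_prob = 1 / (1 + np.exp(-x))         # true logging policy: P(A=1 | x)
a = rng.binomial(1, logging_prob)           # logged binary actions
y = a * x + rng.normal(scale=0.1, size=n)   # outcomes; true E[Y | x, a] = a * x

# Target: value of the policy that always plays action 1, i.e. E[x] = 0 here.
# Nuisance estimates (stand-ins; the regression is deliberately misspecified
# to show that the propensity correction still removes its bias).
f_hat = 0.9 * x            # estimated conditional mean outcome E[Y | x, a=1]
g_hat = logging_prob       # estimated logging policy (here taken as the truth)

# Doubly robust estimator: model prediction plus an inverse-propensity-weighted
# residual correction on the samples where the target action was logged.
dr = f_hat + (a / g_hat) * (y - a * f_hat)
print(dr.mean())  # close to the true policy value E[x] = 0
```

The estimator is unbiased if either nuisance is correct: here the propensity weights cancel the 10% bias of `f_hat`, while `f_hat` lowers the variance relative to pure inverse-propensity weighting.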