Information-Theoretic Causal Bounds under Unmeasured Confounding

Published: 10 Mar 2026. Last Modified: 07 Apr 2026. CLeaR 2026 Poster. License: CC BY 4.0
Keywords: partial identification, unmeasured confounding, information-theoretic bounds, conditional causal effects, semiparametric estimation
TL;DR: We develop sharp, data-driven causal bounds for conditional interventional effects under unmeasured confounding.
Abstract: We develop a data-driven information-theoretic framework for the sharp partial identification of causal effects under unmeasured confounding. Existing approaches often rely on restrictive assumptions, such as bounded or discrete outcomes, require external inputs (e.g., instrumental variables, proxies, or user-specified sensitivity parameters), necessitate full structural causal model specifications, or focus solely on population-level averages while neglecting covariate-conditional treatment effects. We overcome all four limitations simultaneously by establishing novel information-theoretic, data-driven divergence bounds. Our key theoretical contribution establishes that the $f$-divergence between the observational distribution $P(Y \mid A=a, X=x)$ and the interventional distribution $P(Y \mid \mathrm{do}(A=a), X=x)$ is upper bounded by a function of the propensity score alone. This result enables sharp partial identification of conditional causal effects directly from observational data, without requiring external sensitivity parameters, auxiliary variables, full structural specifications, or outcome boundedness assumptions. For practical implementation, we develop a semiparametric estimator satisfying Neyman orthogonality, which enables $\sqrt{n}$-consistent inference even when nuisance functions are estimated via flexible machine learning methods. Simulation studies and real-world data applications demonstrate that our framework provides tight and valid causal bounds across a wide range of data-generating processes.
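To make the central object concrete, the following toy sketch computes the KL divergence (one member of the $f$-divergence family) between the observational distribution $P(Y \mid A=1)$ and the interventional distribution $P(Y \mid \mathrm{do}(A=1))$ in a fully specified DGP with a binary unmeasured confounder $U$. The numbers are illustrative choices, not taken from the paper, and the paper's propensity-score bound itself is not reproduced here; the sketch only exhibits the divergence that the bound constrains.

```python
import math

# Illustrative toy DGP (values are arbitrary choices, not from the paper):
#   U ~ Bernoulli(0.5); A and Y binary, with U confounding both.
p_u = {1: 0.5, 0: 0.5}                       # P(U = u)
p_a1_given_u = {1: 0.8, 0: 0.2}              # P(A = 1 | U = u)
p_y1_given_au = {(1, 1): 0.9, (1, 0): 0.5,   # P(Y = 1 | A = a, U = u)
                 (0, 1): 0.6, (0, 0): 0.2}

# Interventional: P(Y = 1 | do(A = 1)) by backdoor adjustment over U,
# which is available here only because the toy DGP makes U observable.
p_do = sum(p_y1_given_au[(1, u)] * p_u[u] for u in (0, 1))

# Observational: P(Y = 1 | A = 1) = sum_u P(Y = 1 | A = 1, u) P(u | A = 1).
p_a1 = sum(p_a1_given_u[u] * p_u[u] for u in (0, 1))
p_u_given_a1 = {u: p_a1_given_u[u] * p_u[u] / p_a1 for u in (0, 1)}
p_obs = sum(p_y1_given_au[(1, u)] * p_u_given_a1[u] for u in (0, 1))

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Confounding drives the two distributions apart; the paper's result
# upper-bounds this divergence by a function of the propensity score.
print(p_obs, p_do, kl_bernoulli(p_obs, p_do))  # p_obs ≈ 0.82, p_do ≈ 0.70
```

Because the confounder raises both treatment uptake and the outcome, the observational conditional ($\approx 0.82$) overstates the interventional one ($= 0.70$), and the resulting divergence is strictly positive.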
Submission Number: 98