Abstract: A growing literature has focused on representation learning as it relates to the estimation and analysis of heterogeneous treatment effects in both experimental and observational settings. We are specifically interested in the estimation of conditional average treatment effect (CATE) functions, i.e. functions mapping unit-level covariates to the expected effect of a binary treatment.
In the absence of a controlled randomized mechanism of treatment assignment, simple comparisons between treated and control populations can be confounded by significant distributional differences in the covariate space. In this context, recent representation learning strategies aim to learn balanced latent representations in a new space where the treated and control distributions are more comparable, addressing confounding and reducing global bias.
In this work, we leverage self-supervision and contrastive learning and propose a novel contrastive loss function that structures the latent space according to the similarity of estimated individual treatment effects (ITE). We integrate this contrastive learning approach in HERMES (Heterogeneous Effects Representation with Matched Embeddings using Siamese networks), a Siamese Neural Network which learns a structured latent space by dynamically pairing samples whose estimated ITEs are similar.
Unlike representation learning approaches that rely only on covariates, HERMES injects the ITE into representation learning, improving accuracy under standard assumptions. Experiments on the IHDP and JOBS benchmarks show that HERMES improves the expected Precision in Estimation of Heterogeneous Effect (PEHE) by 14–15% over baselines, without added inference cost.
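To make the pairing idea concrete, the following is a minimal sketch of an ITE-similarity contrastive objective of the kind the abstract describes: pairs whose estimated ITEs fall within a similarity threshold are pulled together in the latent space, while dissimilar pairs are pushed apart up to a margin. The function name, the threshold `tau`, and the margin formulation are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def ite_contrastive_loss(z, ite_hat, tau=0.1, margin=1.0):
    """Pairwise contrastive loss over latent embeddings z (n x d),
    driven by estimated ITEs ite_hat (length n).

    Pairs with |ITE_i - ITE_j| < tau are treated as "similar" and
    their latent distance is penalized; dissimilar pairs are pushed
    apart until their distance reaches `margin`.
    """
    n = len(ite_hat)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(z[i] - z[j])       # distance in latent space
            if abs(ite_hat[i] - ite_hat[j]) < tau:
                loss += d ** 2                    # pull similar pairs together
            else:
                loss += max(0.0, margin - d) ** 2  # push dissimilar pairs apart
            pairs += 1
    return loss / pairs
```

In practice such a loss would be computed on mini-batches inside a Siamese training loop, with the ITE estimates themselves refreshed as training proceeds; the quadratic pairwise scan here is only for clarity.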
Submission Type: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=4K2iTXOK3h
Changes Since Last Submission: The desk rejection was motivated by formatting issues in the paper. We identified the problem: the use of "\usepackage{geometry}" unexpectedly changed the paper's margins. Furthermore, we used dedicated anonymization tools to avoid problems with anonymizing the repository containing the developed source code and the files produced during the experiments.
Assigned Action Editor: ~Stefan_Feuerriegel1
Submission Number: 7069