Learning Joint Interventional Effects from Single-Variable Interventions in Additive Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We show that, even under confounding, joint interventional effects are identifiable from observational data and single-variable interventions when each action's contribution to the outcome is nonlinear but additive.
Abstract: Estimating causal effects of joint interventions on multiple variables is crucial in many domains, but obtaining data from such simultaneous interventions can be challenging. Our study explores how to learn joint interventional effects using only observational data and single-variable interventions. We present an identifiability result for this problem, showing that for a class of nonlinear additive outcome mechanisms, joint effects can be inferred without access to joint interventional data. We propose a practical estimator that decomposes the causal effect into confounded and unconfounded contributions for each intervention variable. Experiments on synthetic data demonstrate that our method achieves performance comparable to models trained directly on joint interventional data, outperforming a purely observational estimator.
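As a rough illustration of why additivity helps (the notation $A_1, \dots, A_K$ for actions, $U$ for an unobserved confounder, and the specific model below are our simplification, not necessarily the paper's formal assumptions): suppose the outcome is generated as

$$Y = \sum_{k=1}^{K} f_k(A_k) + g(U),$$

where $U$ confounds the actions and the outcome. A single-variable intervention $\mathrm{do}(A_k = a_k)$ leaves the distributions of $U$ and of the other actions unchanged, so

$$\mathbb{E}[Y \mid \mathrm{do}(A_k = a_k)] = f_k(a_k) + \sum_{j \neq k} \mathbb{E}[f_j(A_j)] + \mathbb{E}[g(U)].$$

Summing over $k$ and subtracting $(K-1)\,\mathbb{E}[Y]$ cancels the purely observational terms and recovers the joint effect:

$$\mathbb{E}[Y \mid \mathrm{do}(A_1 = a_1, \dots, A_K = a_K)] = \sum_{k=1}^{K} \mathbb{E}[Y \mid \mathrm{do}(A_k = a_k)] - (K-1)\,\mathbb{E}[Y].$$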
Lay Summary: Understanding how multiple actions work together to influence an outcome is crucial in many fields, from marketing campaigns to medical treatments. However, running experiments that test every possible combination of actions is often prohibitively expensive and time-consuming: the number of required experiments grows exponentially with each additional variable.

We developed a mathematical approach that allows researchers to predict the effects of combining multiple interventions using simpler experiments where only one variable is changed at a time, plus observational data. Our method works when the outcome can be understood as a sum of separate contributions from each action, even if those individual contributions are complex and nonlinear. We created a practical algorithm that decomposes causal effects into components that can be learned from these simpler data sources.

Our approach could dramatically reduce experimental costs across many domains. A company optimizing marketing across multiple channels could understand joint effects without testing every channel combination. Medical researchers could predict how multiple treatments work together without running every possible clinical trial. By making it possible to learn about complex multi-variable effects from simpler experiments, this work enables more efficient and cost-effective decision-making in situations where comprehensive experimentation would be impractical.
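Below is a minimal, self-contained simulation of that combination idea. It is an assumption-laden toy sketch, not the paper's estimator or the code in the linked repository: the functional forms, the hidden-confounder model, and the choice of regressor are all illustrative.

```python
# Toy simulation sketch (our assumptions, NOT the paper's estimator or the
# linked repository's code): recover a joint effect from observational data
# plus two single-variable interventional regimes when the outcome is
# additive in the actions and confounded by a hidden variable U.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 20_000

def f1(a):  # nonlinear contribution of action A1 (illustrative choice)
    return np.sin(2 * a)

def f2(a):  # nonlinear contribution of action A2 (illustrative choice)
    return 0.5 * a ** 2

def sample(do_a1=None, do_a2=None):
    """Draw (A1, A2, Y) with a hidden confounder U driving both actions and Y."""
    u = rng.normal(size=n)
    a1 = do_a1 * np.ones(n) if do_a1 is not None else u + rng.normal(size=n)
    a2 = do_a2 * np.ones(n) if do_a2 is not None else -u + rng.normal(size=n)
    y = f1(a1) + f2(a2) + 0.8 * u + 0.1 * rng.normal(size=n)
    return a1, a2, y

# Observational regime: only the outcome mean is needed from it here.
_, _, y_obs = sample()

# Single-variable interventional regimes: regress Y on the intervened action.
a1_int = rng.uniform(-2, 2, n)
_, _, y_do1 = sample(do_a1=a1_int)                      # data under do(A1 = a1_int)
mu1 = GradientBoostingRegressor().fit(a1_int[:, None], y_do1)

a2_int = rng.uniform(-2, 2, n)
_, _, y_do2 = sample(do_a2=a2_int)                      # data under do(A2 = a2_int)
mu2 = GradientBoostingRegressor().fit(a2_int[:, None], y_do2)

# Combine the single-intervention fits into a joint-effect estimate:
# E[Y | do(a1, a2)] ≈ E[Y | do(a1)] + E[Y | do(a2)] - E[Y].
a1_q, a2_q = 1.0, -1.0
est = mu1.predict([[a1_q]])[0] + mu2.predict([[a2_q]])[0] - y_obs.mean()
truth = f1(a1_q) + f2(a2_q)                             # E[0.8 * U] = 0 in this toy model
print(f"combined estimate: {est:.3f}   ground truth: {truth:.3f}")
```

With enough samples, the combined estimate in this toy model matches the ground-truth joint effect, whereas a regression fit on the observational regime alone would be biased by the hidden confounder.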
Link To Code: https://github.com/akekic/intervention-generalization.git
Primary Area: General Machine Learning->Causality
Keywords: Causality, Treatment Effect, Causal Representation Learning
Submission Number: 9706