Exogenous Isomorphism for Counterfactual Identifiability

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 spotlight poster · CC BY 4.0
Abstract: This paper investigates $\sim_{\mathcal{L}_3}$-identifiability, a form of complete counterfactual identifiability within the Pearl Causal Hierarchy (PCH) framework, ensuring that all Structural Causal Models (SCMs) satisfying the given assumptions provide consistent answers to all causal questions. To simplify this problem, we introduce exogenous isomorphism and propose $\sim_{\mathrm{EI}}$-identifiability, reflecting the strength of model identifiability required for $\sim_{\mathcal{L}_3}$-identifiability. We explore sufficient assumptions for achieving $\sim_{\mathrm{EI}}$-identifiability in two special classes of SCMs: Bijective SCMs (BSCMs), based on counterfactual transport, and Triangular Monotonic SCMs (TM-SCMs), which extend $\sim_{\mathcal{L}_2}$-identifiability. Our results unify and generalize existing theories, providing theoretical guarantees for practical applications. Finally, we leverage neural TM-SCMs to address the consistency problem in counterfactual reasoning, with experiments validating both the effectiveness of our method and the correctness of the theory.
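To make the TM-SCM setting concrete, here is a minimal toy sketch (an illustrative assumption, not the paper's or the linked repository's implementation): a two-variable SCM whose mechanisms follow the triangular order and are strictly monotonic in their own exogenous noise, so abduction reduces to exact inversion and counterfactuals follow the abduction-action-prediction recipe.

```python
# Hypothetical toy TM-SCM: X1 = f1(U1), X2 = f2(X1, U2),
# with each mechanism strictly increasing in its own noise term.
import math

def f1(u1):
    return u1                        # trivially monotonic in u1

def f2(x1, u2):
    return 0.5 * x1 + math.tanh(u2)  # strictly increasing in u2

def f2_inv(x1, x2):
    # Inverse of f2 in its noise argument, used for abduction.
    return math.atanh(x2 - 0.5 * x1)

def counterfactual_x2(x1_obs, x2_obs, x1_do):
    """Answer 'what would X2 have been had X1 been x1_do?'"""
    u2 = f2_inv(x1_obs, x2_obs)      # abduction: recover U2
    return f2(x1_do, u2)             # action + prediction
```

Because the mechanisms are invertible in their noise, any SCM with the same triangular monotonic form and the same observational behavior gives the same counterfactual answer here; the paper's results characterize when this kind of agreement is guaranteed in general.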
Lay Summary: Causal models can answer hypothetical “what-if” questions, but different models may yield different answers—a phenomenon known as the counterfactual identification problem. This inconsistency makes it difficult for researchers and decision-makers to know which predictions to trust. To address this challenge, we introduce the concept of exogenous isomorphism, which aligns the latent components of different models so that they produce consistent answers to every “what-if” query. We then identify sufficient assumptions that guarantee this alignment across two well-studied model families. Finally, we demonstrate the practical feasibility of our approach by implementing it with neural networks and validating its performance on simulated datasets. Guaranteeing that all models constructed under the same assumptions produce identical answers enhances the reliability of counterfactual reasoning. This consistency is crucial for domains such as healthcare, economics, and policymaking, where trustworthy “what-if” analyses underpin sound decisions.
Link To Code: https://github.com/cyisk/tmscm
Primary Area: General Machine Learning->Causality
Keywords: causality, causal inference, counterfactual identifiability
Submission Number: 10453