Efficient Differentiable Discovery of Causal Order

Published: 06 Mar 2025, Last Modified: 17 Mar 2025 | SCSL @ ICLR 2025 | CC BY 4.0
Track: regular paper (up to 6 pages)
Keywords: regularisation, causality, generalisation
TL;DR: We introduce DiffIntersort, a differentiable approach to causal ordering that enables scalable causal discovery and can be integrated as a regularizer in deep learning models to mitigate spurious correlations.
Abstract: Spurious correlations arise when AI models capture statistical dependencies that do not reflect the true causal structure of the underlying reality, leading to unreliable predictions and unsafe decision-making, particularly in high-stakes domains. While causal discovery methods exist to infer causal structure from data, many are computationally expensive and non-differentiable, limiting their integration into modern AI systems. In this work, we introduce a differentiable approach to causal ordering that allows causal discovery to be seamlessly incorporated as a module within existing machine learning pipelines. Our method builds upon Intersort (Chevalley et al., 2025), a score-based algorithm for discovering causal order in Directed Acyclic Graphs (DAGs) using interventional data. To enable differentiable optimization, we develop a continuous relaxation of Intersort using differentiable sorting and ranking techniques, allowing causal constraints to be directly integrated into gradient-based learning frameworks. By incorporating causal discovery as a regularizer, our approach encourages models to rely on causal relationships rather than spurious correlations, ultimately improving their robustness and trustworthiness when actions are taken based on the learned model. Empirical results demonstrate that enforcing causal order as an inductive bias enhances model generalization and interpretability, making AI systems more reliable and safer for real-world deployment.
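The paper's exact relaxation of Intersort is not reproduced here, but the core idea the abstract describes, replacing hard ranking with a differentiable surrogate so a causal-order constraint can act as a gradient-friendly regularizer, can be sketched under simple assumptions. The snippet below is illustrative only: it uses a pairwise-sigmoid soft rank (one common differentiable-ranking construction, not necessarily the one used in the paper), and the function names `soft_rank` and `order_penalty` are hypothetical.

```python
import numpy as np

def soft_rank(scores, tau=0.1):
    # Differentiable surrogate for the rank of each score: as tau -> 0,
    # rank_i approaches the number of entries strictly smaller than s_i.
    diff = scores[:, None] - scores[None, :]          # pairwise differences
    sig = 1.0 / (1.0 + np.exp(-diff / tau))           # smooth comparisons
    return sig.sum(axis=1) - 0.5                      # drop self-term (sigmoid(0) = 0.5)

def order_penalty(scores, edges, tau=0.1):
    # Regularizer: penalize any edge (parent -> child) whose parent is
    # soft-ranked after its child; zero when the causal order is respected.
    r = soft_rank(scores, tau)
    return sum(max(0.0, r[p] - r[c]) for p, c in edges)
```

Because both pieces are built from smooth operations (plus a hinge), the penalty can be added to a training loss and minimized with standard gradient-based optimizers, which is the integration pattern the abstract refers to.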
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Mathieu_Chevalley1
Format: Yes, the presenting author will definitely attend in person, as they are attending ICLR for other complementary reasons.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or they have sufficient alternate funding.
Submission Number: 30