Neural Causal Regularization under the Independence of Mechanisms Assumption

Submitted to ICLR 2017
Abstract: Neural networks provide a powerful framework for learning the association between input and response variables and making accurate predictions. However, in many applications such as healthcare, it is important to identify causal relationships between the inputs and the response, so that the response can be changed by intervening on the inputs. In pursuit of models whose predictive power comes maximally from causal variables, we propose a novel causal regularizer based on the independence of mechanisms assumption. We use the causal regularizer to steer deep neural network architectures toward causally interpretable solutions. We perform a large-scale analysis of electronic health records and, employing expert judgment as the causal ground truth, show that our causally regularized algorithm outperforms its L1-regularized counterpart in both predictive performance and causal relevance. Finally, we show that the proposed causal regularizer can be combined with representation learning algorithms to yield up to a 20% improvement in the causality score of the generated hypotheses.
TL;DR: We designed a neural causal regularizer to encourage predictive models to be more causal.
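The abstract does not spell out the functional form of the regularizer; the sketch below shows one plausible reading, assuming per-feature causality scores in [0, 1] that down-weight an L1-style penalty on inputs deemed likely causal. All names here (`causal_regularizer`, `causal_scores`, `lam`) are illustrative stand-ins, not the paper's API.

```python
import torch

def causal_regularizer(weights, causal_scores, lam=0.01):
    """Hypothetical causally-weighted L1 penalty.

    causal_scores[j] in [0, 1] is an assumed estimate of how likely
    input feature j is a cause of the response. Features with high
    causality scores are penalized less, steering the model's
    predictive power toward causal variables.
    """
    penalty_weights = 1.0 - causal_scores              # low penalty for likely-causal inputs
    return lam * (penalty_weights * weights.abs()).sum()

# Usage: add the penalty to an ordinary prediction loss.
w = torch.randn(10, requires_grad=True)                # first-layer weights, one per input
scores = torch.rand(10)                                # stand-in causality scores
prediction_loss = torch.tensor(0.0)                    # placeholder for the model's loss
total = prediction_loss + causal_regularizer(w, scores)
total.backward()
```

Under this reading, setting all scores to zero recovers plain L1 regularization, which is consistent with the abstract's comparison against an L1-regularized baseline.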
Conflicts: gatech.edu, caltech.edu, usc.edu, cmu.edu, ibm.com, ed.ac.uk
Keywords: Deep learning, Applications