Neural Causal Regularization under the Independence of Mechanisms Assumption

Mohammad Taha Bahadori, Krzysztof Chalupka, Edward Choi, Robert Chen, Walter F. Stewart, Jimeng Sun

Nov 04, 2016 (modified: Feb 06, 2017) ICLR 2017 conference submission readers: everyone
  • Abstract: Neural networks provide a powerful framework for learning the association between input and response variables and making accurate predictions. However, in many applications such as healthcare, it is important to identify causal relationships between the inputs and the response variables, so that the response variables can be changed by intervening on the inputs. In pursuit of models whose predictive power comes maximally from causal variables, we propose a novel causal regularizer based on the independence of mechanisms assumption. We utilize the causal regularizer to steer deep neural network architectures towards causally interpretable solutions. We perform a large-scale analysis of electronic health records. Employing expert judgment as the causal ground truth, we show that our causally regularized algorithm outperforms its L1-regularized equivalent in both predictive performance and causal relevance. Finally, we show that the proposed causal regularizer can be used together with representation learning algorithms to yield up to 20% improvement in the causality score of the generated hypotheses.
  • TL;DR: We designed a neural causal regularizer to encourage predictive models to be more causal.
  • Keywords: Deep learning, Applications
  • Conflicts: gatech.edu, caltech.edu, usc.edu, cmu.edu, ibm.com, ed.ac.uk
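The abstract compares the proposed causal regularizer against a plain L1 penalty. As a hedged illustration only (the paper's actual regularizer is derived from the independence of mechanisms assumption and is not reproduced here), one common way to bias a model toward causal variables is a causality-weighted L1 penalty, where per-feature weights down-weight the penalty on features believed to be causal. The function name and the `causal_scores` input are hypothetical, not from the paper:

```python
import numpy as np

def causally_weighted_l1(weights, causal_scores, lam=0.1):
    """Hypothetical sketch of a causality-weighted L1 penalty.

    weights: model coefficients, shape (d,)
    causal_scores: estimated probability in [0, 1] that each
        feature is causal (higher = more likely causal); this
        score would come from a separate causal-discovery step.
    lam: overall regularization strength.

    Features judged likely causal (score near 1) are penalized
    lightly; likely non-causal features (score near 0) are
    penalized at the full L1 rate, pushing them toward zero.
    """
    return lam * np.sum((1.0 - causal_scores) * np.abs(weights))

# Example: a fully causal feature incurs no penalty, a
# non-causal one incurs the standard L1 penalty.
w = np.array([1.0, -2.0])
scores = np.array([1.0, 0.0])
penalty = causally_weighted_l1(w, scores, lam=0.5)
```

Setting all scores to zero recovers ordinary L1 regularization, which is the baseline the abstract compares against.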