Learning Causal Representations with Granger PCA

Published: 09 Jul 2022, Last Modified: 05 May 2023. CRL@UAI 2022 Poster.
Keywords: causal representation, principal components, PCA, canonical correlation, causal discovery, time series, partial least squares, PLS, CCA
Abstract: Learning causal feature representations helps us identify relevant subspaces that express the signal of interest and understand (and imagine interventions on) the underlying causal mechanisms. In this work, we adopt a rather pragmatic standpoint and propose learning Granger-causal feature representations with a simple additional rotation on top of classical Principal Component Analysis (PCA). We generalize the methodology to nonlinear Granger-causal representations with kernel PCA, demonstrate its performance empirically on linear and nonlinear toy examples, and address the relevant problem of discovering Granger-causal long-range spatio-temporal teleconnections in the Earth system. The methodology can be of practical convenience in high-dimensional, low-sample-size problems.
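The core idea of the abstract — PCA followed by an extra rotation chosen to expose Granger-causal structure — can be illustrated with a toy sketch. The code below is not the paper's algorithm; it is a minimal, hypothetical illustration assuming a simple Granger score (relative residual-variance drop when a lagged candidate component is added to an AR(1) model of a target component) and a grid search over 2-D rotation angles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent dynamics: s1 Granger-causes s2 with a one-step lag.
T = 500
s1 = rng.standard_normal(T)
s2 = np.zeros(T)
for t in range(1, T):
    s2[t] = 0.8 * s1[t - 1] + 0.1 * rng.standard_normal()
S = np.stack([s1, s2], axis=1)

# Embed the 2-D latents in 10-D observations via a random mixing matrix.
A = rng.standard_normal((2, 10))
X = S @ A + 0.05 * rng.standard_normal((T, 10))

# Classical PCA: keep the top-2 principal component scores.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # (T, 2) PC scores

def rss(D, y):
    """Residual sum of squares of a least-squares fit of y on design D."""
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.sum((y - D @ beta) ** 2)

def granger_score(u, v):
    """Relative RSS drop when lagged u is added to an AR(1) model of v."""
    y = v[1:]
    ones = np.ones(len(y))
    restricted = np.column_stack([v[:-1], ones])
    full = np.column_stack([v[:-1], u[:-1], ones])
    r_r, r_f = rss(restricted, y), rss(full, y)
    return (r_r - r_f) / r_r

# Extra rotation on top of PCA: grid-search the angle that maximizes
# the Granger score from rotated component 0 to rotated component 1.
angles = np.linspace(0.0, np.pi, 180)
scores = []
for th in angles:
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    Zr = Z @ R
    scores.append(granger_score(Zr[:, 0], Zr[:, 1]))

best_angle = angles[int(np.argmax(scores))]
print(f"best angle: {best_angle:.3f} rad, Granger score: {max(scores):.3f}")
```

Plain PCA components (the score at angle 0) mix the driver and the driven signal; rotating within the retained subspace lets one axis align with the Granger cause and the other with the effect, which is the intuition behind adding a rotation on top of PCA.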