Keywords: gaussian process, independent mechanism analysis, GPLVM, IMA, kernel
TL;DR: Additive and stationary kernels imply that the non-statistical independence formulated by Independent Mechanism Analysis (IMA) holds in GPLVMs and potentially aids learning the true latent factors
Abstract: Independence is a common assumption when modeling generative processes. Independent Mechanism Analysis (IMA) relies on the Independent Causal Mechanisms (ICM) principle to formulate non-statistical independence by measuring the column-orthogonality of the decoder Jacobian. This work starts from the observation of the same column-orthogonality in GPLVMs and shows how additive and stationary kernels give rise to independent mechanisms in expectation. To handle the probabilistic nature of GPLVMs, we provide an upper bound on the orthogonality measure under specific kernel conditions. By connecting IMA and GPLVMs, our paper takes a first step toward elucidating a useful inductive bias in GPLVMs for recovering the true latent factors, which we demonstrate in synthetic experiments.
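To make the orthogonality measure concrete, the following is a minimal sketch (not the paper's implementation) of the local IMA contrast for a decoder with a square Jacobian: the sum of log column norms minus the log absolute determinant, which is nonnegative by Hadamard's inequality and zero exactly when the Jacobian columns are orthogonal. The helper names `jacobian` and `ima_contrast` are illustrative, not from the paper.

```python
import numpy as np

def jacobian(f, z, eps=1e-6):
    """Forward finite-difference Jacobian of f at z (illustrative helper)."""
    z = np.asarray(z, dtype=float)
    f0 = np.asarray(f(z))
    J = np.zeros((f0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (np.asarray(f(z + dz)) - f0) / eps
    return J

def ima_contrast(J):
    """Local IMA contrast for a square Jacobian J:
    sum_j log ||J_:,j|| - log |det J|.
    Nonnegative by Hadamard's inequality; zero iff the columns of J
    are mutually orthogonal (the IMA notion of independent mechanisms)."""
    col_norms = np.linalg.norm(J, axis=0)
    return float(np.sum(np.log(col_norms)) - np.log(abs(np.linalg.det(J))))

# A decoder whose Jacobian has orthogonal columns (a rotation): contrast ~ 0.
rot = lambda z: np.array([z[0] * np.cos(1.0) - z[1] * np.sin(1.0),
                          z[0] * np.sin(1.0) + z[1] * np.cos(1.0)])

# A sheared decoder with non-orthogonal Jacobian columns: contrast > 0.
shear = lambda z: np.array([z[0] + 0.8 * z[1], z[1]])

z0 = np.array([0.3, -0.5])
print(ima_contrast(jacobian(rot, z0)))    # close to 0
print(ima_contrast(jacobian(shear, z0)))  # strictly positive
```

For the shear map the Jacobian is [[1, 0.8], [0, 1]] with determinant 1, so the contrast equals log sqrt(1 + 0.8^2) > 0, illustrating how non-orthogonal columns are penalized.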