Keywords: Gaussian process, doubly stochastic variational inference, variational inference, Bayesian inference
Abstract: We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions. Remarkably, we find that deep Gaussian processes (DGPs), Bayesian neural networks (BNNs), infinite BNNs, and infinite BNNs with bottlenecks can all be written as deep kernel processes. For DGPs the equivalence arises because the Gram matrix formed by the inner product of features is Wishart distributed, and, as we show, standard isotropic kernels can be written entirely in terms of this Gram matrix (we do not need knowledge of the underlying features). We define a tractable deep kernel process, the deep inverse Wishart process, and give a doubly-stochastic inducing-point variational inference scheme that operates on the Gram matrices, not on the features (as in DGPs). We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on standard fully-connected baselines.
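The following is a minimal numerical sketch (not the authors' code) of two claims in the abstract: the Gram matrix of Gaussian features is Wishart distributed, and an isotropic kernel such as the squared exponential can be computed from the Gram matrix alone, without access to the features. All names here (e.g. `squared_exponential_from_gram`) are illustrative, not taken from the paper.

```python
# Sketch of: (a) Gram matrices of Gaussian features are Wishart samples,
# (b) isotropic kernels only need the Gram matrix, not the features.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
P, N = 5, 200                          # P data points, N feature dimensions

# Features drawn i.i.d. Gaussian across dimensions, covariance K across points.
K = np.eye(P) + 0.5                    # an arbitrary positive-definite covariance
F = rng.multivariate_normal(np.zeros(P), K, size=N).T   # shape (P, N)

# (a) G = F F^T is Wishart(df=N, scale=K) distributed.
G = F @ F.T
print(wishart(df=N, scale=K).logpdf(G))   # finite log-density under that Wishart

# (b) An isotropic kernel depends on the features only through G, because
#     ||f_i - f_j||^2 = G_ii + G_jj - 2 G_ij.
def squared_exponential_from_gram(G, lengthscale=1.0):
    d = np.diag(G)
    sq_dists = d[:, None] + d[None, :] - 2.0 * G
    return np.exp(-0.5 * sq_dists / lengthscale**2)

K_next = squared_exponential_from_gram(G / N)   # normalised by feature width
```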
One-sentence Summary: We give a doubly-stochastic variational scheme for deep kernel processes, which are similar to deep Gaussian processes but operate entirely on Gram matrices rather than on the underlying features.
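To illustrate what "operating entirely on Gram matrices" means, here is a schematic generative pass through a deep inverse Wishart process. It assumes an inverse-Wishart parametrisation in which the scale is chosen so that each layer's Gram matrix has the previous layer's kernel matrix as its mean; the paper's exact parametrisation and its doubly-stochastic inducing-point inference scheme are not reproduced here, and the function and argument names are hypothetical.

```python
# Schematic forward pass: Gram matrix -> kernel -> inverse-Wishart sample, repeated.
import numpy as np
from scipy.stats import invwishart

def squared_exponential_from_gram(G, lengthscale=1.0):
    # Isotropic kernel computed from the Gram matrix alone (no features needed).
    d = np.diag(G)
    sq_dists = d[:, None] + d[None, :] - 2.0 * G
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def deep_iw_process_sample(X, n_layers=3, df_offset=2.0, seed=0):
    """Propagate a Gram matrix through kernel + inverse-Wishart layers."""
    rng = np.random.RandomState(seed)
    P = X.shape[0]
    G = X @ X.T / X.shape[1]            # input Gram matrix
    for _ in range(n_layers):
        K = squared_exponential_from_gram(G)
        df = P + df_offset + 1          # degrees of freedom (must exceed P + 1)
        scale = (df - P - 1) * K        # chosen so that E[G_next] = K
        G = invwishart(df=df, scale=scale).rvs(random_state=rng)
    return G

G_out = deep_iw_process_sample(np.random.randn(6, 4))
print(G_out.shape)                      # (6, 6): Gram matrices in, Gram matrices out
```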
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.01590/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=rx-VTeU-FR