Multiview learning with variational mixtures of Gaussian processes

Knowl. Based Syst., 2020
Abstract:

Highlights
• We present a framework of multiview learning for mixtures of Gaussian processes.
• MvMGPs maximize the posterior distribution of latent variables in each view.
• MvMGPs regularize the objective function to learn the parameters of different views.
• The proposed model outperforms multiple baselines on classification tasks.

Gaussian processes (GPs) are powerful Bayesian nonparametric tools widely used in probabilistic modeling, and mixtures of GPs (MGPs) were later introduced to make data modeling more flexible. However, MGPs are not directly applicable to multiview learning. To improve the modeling ability of MGPs, in this paper we propose a new multiview learning framework for MGPs and instantiate it for classification. We make the divergence between views as small as possible while keeping the posterior probability of each view as large as possible. Specifically, we regularize the posterior distribution of the latent variables with the consistency of the posterior distributions of the latent functions across different views. Since the model cannot be solved analytically, we also present variational inference and optimization algorithms for the classification model. Experimental results on multiple real-world datasets show that the proposed method outperforms the original MGP model and several state-of-the-art multiview learning methods, which indicates the effectiveness of the proposed multiview learning framework for MGPs.
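To make the general idea concrete, the sketch below is a minimal toy illustration (not the paper's actual model or inference algorithm) of combining per-view GP objectives with a cross-view consistency regularizer: each view gets a Gaussian posterior over the latent function values at the shared samples, and a symmetric KL divergence between the two views' posteriors penalizes disagreement. The RBF kernel, Gaussian likelihood, regularization weight `lam`, and the toy data are all assumptions made for illustration.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two input sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, noise=0.1):
    """Gaussian posterior over latent f at the training inputs: mean, covariance, log-evidence."""
    N = X.shape[0]
    K = rbf_kernel(X, X)
    Ky = K + noise * np.eye(N)
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K @ alpha
    cov = K - K @ np.linalg.solve(Ky, K)
    log_evidence = (-0.5 * y @ alpha
                    - np.log(np.diag(L)).sum()
                    - 0.5 * N * np.log(2 * np.pi))
    return mean, cov, log_evidence

def sym_kl_gaussians(m1, S1, m2, S2, jitter=1e-6):
    """Symmetric KL divergence between two multivariate Gaussians (the consistency term)."""
    N = len(m1)
    S1 = S1 + jitter * np.eye(N)
    S2 = S2 + jitter * np.eye(N)
    def kl(ma, Sa, mb, Sb):
        Sb_inv = np.linalg.inv(Sb)
        d = mb - ma
        return 0.5 * (np.trace(Sb_inv @ Sa) + d @ Sb_inv @ d - N
                      + np.linalg.slogdet(Sb)[1] - np.linalg.slogdet(Sa)[1])
    return kl(m1, S1, m2, S2) + kl(m2, S2, m1, S1)

# Hypothetical toy data: the same 20 samples observed under two different feature views.
rng = np.random.default_rng(0)
X_view1 = rng.normal(size=(20, 3))
X_view2 = X_view1 @ rng.normal(size=(3, 5)) + 0.1 * rng.normal(size=(20, 5))
y = np.sin(X_view1[:, 0]) + 0.1 * rng.normal(size=20)

m1, S1, ev1 = gp_posterior(X_view1, y)
m2, S2, ev2 = gp_posterior(X_view2, y)

lam = 0.1  # hypothetical weight trading off per-view fit against view agreement
objective = (ev1 + ev2) - lam * sym_kl_gaussians(m1, S1, m2, S2)
print(f"per-view evidences: {ev1:.2f}, {ev2:.2f}; regularized objective: {objective:.2f}")

In the paper, the per-view terms are posteriors of a mixture of GPs rather than a single GP, the likelihood is a classification one, and the objective is optimized with variational inference; the sketch only shows the structure of "per-view fit plus cross-view consistency penalty".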