Abstract: In this work, we focus on the multi-view dimensionality reduction problem for tensor data on graphs. In particular, we extend canonical correlation analysis on graphs (CCA-G) and multi-view canonical correlation analysis on graphs (MCCA-G) to tensor data. Directly applying CCA-G and MCCA-G to tensor data requires vectorization, which destroys the underlying structure in the data and often produces very high-dimensional vectors, leading to the curse of dimensionality. To circumvent the vectorization operation, we propose tensor canonical correlation analysis on graphs (TCCA-G) for two-view data and tensor multi-view canonical correlation analysis on graphs (TMCCA-G) for multi-view tensor data, which preserve the intrinsic structure in the data and account for the underlying graph structure in the latent variables. In particular, the proposed TCCA-G promotes smoothness of the tensor canonical variates over a graph and outputs tensor canonical variates that are correlated within the set and uncorrelated across the sets. In the absence of prior (smoothness) information on the latent variable, TCCA-G simplifies to tensor canonical correlation analysis (TCCA), which only preserves the intrinsic structure in the data and results in an uncorrelated set of features. To solve TCCA-G and TCCA, we present an algorithm based on alternating minimization. In particular, the canonical subspaces in TCCA and TCCA-G are obtained by solving an eigenvalue problem. TMCCA-G extends TCCA-G to multi-view data; it obtains the canonical subspaces by solving a simple least-squares problem, and the common source is updated recursively using a Crank-Nicolson-like scheme that preserves the orthonormality constraints. In the absence of the graph prior, we present tensor multi-view canonical correlation analysis (TMCCA), in which the common source is obtained in closed form by solving an orthogonal Procrustes problem. Therefore, each subproblem in TMCCA admits a closed-form solution, in contrast to TMCCA-G. We show the efficacy of the proposed algorithms through experiments on diverse tasks such as classification and clustering on real datasets.
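For context on the closed-form step mentioned for TMCCA: the orthogonal Procrustes problem of maximizing trace(S^T M) over matrices S with orthonormal columns has the standard SVD-based solution S = U V^T, where M = U Sigma V^T. The sketch below illustrates only this generic solution; the matrix M, the helper name, and the toy dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def procrustes_solution(M):
    """Closed-form solution of max_{S : S^T S = I} trace(S^T M):
    S = U V^T, where M = U Sigma V^T is the thin SVD of M."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# Toy illustration with a hypothetical (samples x components) matrix M,
# e.g., an aggregate of projected views in a multi-view setting.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 5))
S = procrustes_solution(M)
print(np.allclose(S.T @ S, np.eye(5)))  # orthonormal columns: True
```

Orthonormality of S follows directly because U and V have orthonormal columns, which is why a subproblem of this form admits a closed-form update.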