Abstract: Graph convolutional networks (GCNs) and graph neural networks (GNNs) have demonstrated convincing performance on many tasks by learning the intrinsic structure of the data. However, it remains valuable and challenging to capture the complex and complete correlations among objects, i.e., high-order manifold structures, for representation learning. In this paper, we present a novel representation learning method that exploits the optimized high-order manifold of the data for classification on both non-structural and graph-structured data. The method fully explores the complicated relationships among samples by highlighting high-order manifold information in a hypergraph. Specifically, we incorporate high-order manifold information via the graph $p$-Laplacian into a hypergraph and propose $p$-Laplacian-based hypergraph neural networks (pLapHGNN) to learn hidden-layer representations that encode both the high-order structure of the data and its high-order manifold geometry. To address the difficulty of obtaining the optimized high-order manifold of the data, we propose an effective approximation in which the graph $p$-Laplacian represents the relationships among hyperedges in the hypergraph. Furthermore, we study the weights of hyperedges in a hypergraph enriched with high-order manifold information. Experiments on the ModelNet40 and NTU datasets demonstrate that the proposed method is more effective than other popular methods for 3D shape recognition. Extensive experiments on other visual classification tasks and citation networks also show the superiority of the proposed method for representation learning.
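For orientation, the sketch below shows a standard hypergraph convolution layer in the HGNN style (propagation of the form $D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2} X \Theta$), which is the kind of hypergraph backbone the abstract suggests pLapHGNN builds on. It is a minimal, assumption-laden sketch: the graph $p$-Laplacian manifold term and the optimized hyperedge weights described in the abstract are not specified there, so learnable hyperedge weights are used as a placeholder, and the class name `HypergraphConv` and all shapes are illustrative rather than taken from the paper.

```python
# Hedged sketch of an HGNN-style hypergraph convolution layer.
# The p-Laplacian manifold term and hyperedge-weight optimization from the
# abstract are NOT reproduced; `edge_weight` is a simple learnable stand-in.
import torch
import torch.nn as nn


class HypergraphConv(nn.Module):
    """One layer: X' = sigma(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta)."""

    def __init__(self, in_dim: int, out_dim: int, num_hyperedges: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)
        # Learnable hyperedge weights (placeholder for manifold-optimized weights).
        self.edge_weight = nn.Parameter(torch.ones(num_hyperedges))

    def forward(self, x: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); H: (num_nodes, num_hyperedges) incidence matrix.
        w = torch.relu(self.edge_weight)            # keep hyperedge weights non-negative
        dv = (H * w).sum(dim=1).clamp(min=1e-6)     # node degrees d(v) = sum_e w(e) h(v, e)
        de = H.sum(dim=0).clamp(min=1e-6)           # hyperedge degrees delta(e) = sum_v h(v, e)
        dv_inv_sqrt = dv.pow(-0.5)
        x = dv_inv_sqrt.unsqueeze(1) * x            # Dv^-1/2 X
        x = H.t() @ x                               # H^T Dv^-1/2 X
        x = (w / de).unsqueeze(1) * x               # W De^-1 H^T Dv^-1/2 X
        x = H @ x                                   # H W De^-1 H^T Dv^-1/2 X
        x = dv_inv_sqrt.unsqueeze(1) * x            # Dv^-1/2 H W De^-1 H^T Dv^-1/2 X
        return torch.relu(self.theta(x))            # apply Theta and nonlinearity


# Toy usage: 5 nodes with 8-dim features, 3 hyperedges.
H = torch.tensor([[1., 0., 1.],
                  [1., 1., 0.],
                  [0., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
layer = HypergraphConv(in_dim=8, out_dim=4, num_hyperedges=3)
out = layer(torch.randn(5, 8), H)
print(out.shape)  # torch.Size([5, 4])
```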