Abstract: Matrix manifolds play a fundamental role in machine learning, underpinning both data representations (e.g., linear subspaces and covariance matrices) and optimization procedures. These manifolds obey Riemannian geometry, where intrinsic curvature significantly impacts the performance of geometric learning algorithms. However, traditional visualization methods built on Euclidean assumptions disregard curvature information, distorting the underlying non-Euclidean structure. To address this limitation, we generalize the popular t-SNE paradigm to Riemannian manifolds and apply it to three types of matrix manifolds: the Grassmann, correlation, and Symmetric Positive Semi-Definite (SPSD) manifolds. By constructing a probability distribution mapping between the original and target spaces, our method transforms high-dimensional manifold-valued data points into low-dimensional ones, preserving curvature information and avoiding the distortion caused by Euclidean flattening. This work provides a foundation for general-purpose dimensionality reduction on high-dimensional matrix manifolds. Extensive experimental comparisons with existing visualization methods on synthetic and benchmark datasets demonstrate the efficacy of our proposal in preserving the geometric properties of the data.
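To make the abstract's idea of curvature-aware affinities concrete, the sketch below illustrates one way t-SNE-style high-dimensional affinities can be built from geodesic rather than Euclidean distances, using the Grassmann manifold as an example. This is not the paper's exact formulation: the function names, the fixed-bandwidth Gaussian kernel (standard t-SNE calibrates a per-point bandwidth via perplexity), and the small demo are illustrative assumptions.

```python
import numpy as np

def grassmann_geodesic_distance(X, Y):
    """Geodesic distance on the Grassmann manifold Gr(p, n), where X and Y are
    n x p matrices with orthonormal columns spanning the two subspaces.
    The singular values of X^T Y are the cosines of the principal angles."""
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    thetas = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(thetas)

def high_dim_affinities(points, sigma=1.0):
    """t-SNE-style symmetric joint affinities P built from geodesic (not
    Euclidean) pairwise distances between manifold-valued points.
    NOTE: a fixed Gaussian bandwidth `sigma` is an illustrative simplification."""
    n = len(points)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = grassmann_geodesic_distance(points[i], points[j])
    P = np.exp(-D**2 / (2.0 * sigma**2))
    np.fill_diagonal(P, 0.0)          # no self-affinity
    P /= P.sum()                      # normalize to a joint distribution over pairs
    return np.maximum(P, 1e-12)       # numerical floor, as in standard t-SNE

if __name__ == "__main__":
    # Hypothetical demo: 10 random 2-dimensional subspaces of R^5 (points on Gr(2, 5)).
    rng = np.random.default_rng(0)
    points = [np.linalg.qr(rng.standard_normal((5, 2)))[0] for _ in range(10)]
    P = high_dim_affinities(points, sigma=0.5)
    print(P.shape, P.sum())           # (10, 10), approximately 1.0
```

The resulting matrix P would then play the role of the high-dimensional affinities in the usual t-SNE objective, with the low-dimensional embedding optimized against a Student-t based distribution Q as in standard t-SNE; the key difference illustrated here is only that pairwise closeness is measured intrinsically on the manifold.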
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Bamdev_Mishra1
Submission Number: 6937