Abstract: Spectral Embedding (SE) is a popular method for dimensionality reduction, applicable across diverse domains. Nevertheless, its current implementations face three prominent drawbacks that curtail its broader applicability: generalizability (i.e., out-of-sample extension), scalability, and eigenvector separation.
In this paper, we introduce $\textit{sep-SpectralNet}$ (eigenvector-separated SpectralNet), a novel deep-learning approach for generalizable and efficient approximate spectral embedding, designed to address these limitations.
sep-SpectralNet incorporates an efficient post-processing step to achieve eigenvector separation while ensuring both generalizability and scalability, allowing the Laplacian's eigenvectors to be computed on unseen data. This method expands the applicability of SE to a wider range of tasks and can enhance its performance in existing applications.
We empirically demonstrate sep-SpectralNet's ability to consistently approximate and generalize SE, while ensuring scalability. Additionally, we show how sep-SpectralNet can be leveraged to enhance existing methods. Specifically, we focus on UMAP, a leading visualization technique, and introduce $\textit{NUMAP}$, a generalizable version of UMAP powered by sep-SpectralNet. Our code will be publicly available upon acceptance.
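For context, classical spectral embedding maps data points to the leading non-trivial eigenvectors of a graph Laplacian built from pairwise affinities. The sketch below illustrates this standard technique (not sep-SpectralNet itself); the Gaussian kernel and its bandwidth `sigma` are illustrative assumptions.

```python
# Minimal sketch of classical spectral embedding (SE), assuming a
# Gaussian affinity kernel. Exact eigendecomposition like this is what
# scales poorly and has no out-of-sample extension -- the limitations
# the paper's deep-learning approach targets.
import numpy as np
from scipy.spatial.distance import cdist

def spectral_embedding(X, n_components=2, sigma=1.0):
    """Embed rows of X via eigenvectors of the symmetric normalized Laplacian."""
    # Gaussian affinity matrix (bandwidth sigma is an illustrative choice)
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Eigenvectors for the smallest eigenvalues; skip the trivial first one
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:n_components + 1]

X = np.random.RandomState(0).randn(100, 5)
Y = spectral_embedding(X)
print(Y.shape)  # (100, 2)
```

Note that the full N x N eigendecomposition costs O(N^3) and yields no mapping for unseen points, which is why a learned, generalizable approximation is attractive.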
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Søren_Hauberg1
Submission Number: 4030