Using Dimensionality Reduction to Optimize t-SNE

Published: 01 Jan 2019 · Last Modified: 29 Sept 2024 · CoRR 2019 · CC BY-SA 4.0
Abstract: t-SNE is a popular tool for embedding multi-dimensional datasets into two or three dimensions. However, it has a large computational cost, especially when the input data has many dimensions. Many practitioners therefore apply t-SNE to the output of a neural network, which generally has much lower dimension than the original data; because such networks typically require supervised training, this limits the use of t-SNE in unsupervised scenarios. We propose using random projections to embed high-dimensional datasets into relatively few dimensions, and then using t-SNE to obtain a two-dimensional embedding. We show that random projections preserve the desirable clustering achieved by t-SNE while dramatically reducing the runtime of finding the embedding.
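The pipeline described in the abstract can be sketched with off-the-shelf components; the snippet below is a minimal illustration using scikit-learn's GaussianRandomProjection and TSNE, not the authors' implementation, and the target dimension (50) and other parameters are assumptions chosen for illustration.

```python
# Sketch: random projection followed by t-SNE (assumed parameters, not from the paper).
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
X = rng.randn(5000, 10000)  # placeholder high-dimensional data

# Step 1: randomly project the data down to a modest number of dimensions.
rp = GaussianRandomProjection(n_components=50, random_state=0)
X_low = rp.fit_transform(X)

# Step 2: run t-SNE on the projected data to obtain a two-dimensional embedding.
tsne = TSNE(n_components=2, random_state=0)
embedding = tsne.fit_transform(X_low)
print(embedding.shape)  # (5000, 2)
```

Because the random projection is cheap and reduces the input dimensionality that t-SNE must process, the second step runs much faster than applying t-SNE to the raw high-dimensional data.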