The Out-of-sample Extensions of t-SNE: From Gradient Descent to Fixed-point Iteration Algorithms

TMLR Paper6887 Authors

07 Jan 2026 (modified: 26 Jan 2026) · Under review for TMLR · CC BY 4.0
Abstract: This paper addresses the out-of-sample extension of t-distributed stochastic neighbor embedding (t-SNE), namely extending the embedding to data that were not considered when training the t-SNE. We demonstrate that the out-of-sample extension of t-SNE can be derived easily, thanks to the structure of the t-SNE formulation. Several resolution strategies are devised, from gradient descent to fixed-point iteration algorithms. Moreover, we establish several theoretical findings that allow us to understand the underlying optimization mechanism of the fixed-point iteration, such as demonstrating that its repulsion-free variant corresponds to Newton's method, and providing several appealing properties, including connections with the mean shift algorithm and the resolution of the pre-image problem in machine learning. Experimental results on three well-known real data sets show the relevance and efficiency of the proposed out-of-sample methods, with the repulsion-free fixed-point iteration outperforming the other methods.
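To make the gradient-descent strategy mentioned in the abstract concrete, the following is a minimal illustrative sketch of embedding a single new point given a trained t-SNE map: the new point's position is optimized to make its Student-t affinities to the training embedding match its Gaussian affinities in the input space. All function names, the fixed bandwidth `sigma`, the learning rate, and the initialization are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def embed_new_point(X, Y, x_new, sigma=1.0, lr=1.0, n_iter=200):
    """Out-of-sample t-SNE by gradient descent (illustrative sketch).

    X: (n, d) training data; Y: (n, 2) trained t-SNE embedding;
    x_new: (d,) new sample. The bandwidth and learning rate are
    assumed fixed for simplicity.
    """
    # High-dimensional Gaussian affinities of the new point to the training set
    d2 = np.sum((X - x_new) ** 2, axis=1)
    p = np.exp(-d2 / (2.0 * sigma ** 2))
    p /= p.sum()

    # Initialize at the embedding of the nearest training point
    y = Y[np.argmin(d2)].copy()

    for _ in range(n_iter):
        # Low-dimensional Student-t (Cauchy) affinities to the training embedding
        num = 1.0 / (1.0 + np.sum((Y - y) ** 2, axis=1))
        q = num / num.sum()
        # Standard t-SNE gradient of KL(p || q) with respect to y
        grad = 4.0 * np.sum(((p - q) * num)[:, None] * (y - Y), axis=0)
        y -= lr * grad
    return y
```

A fixed-point variant, as studied in the paper, would instead rearrange the stationarity condition of this same objective into an update of the form y ← F(y) and iterate it directly, rather than taking gradient steps.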
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Dan_Garber1
Submission Number: 6887