Random Forest Autoencoders for Guided Representation Learning

Published: 23 Oct 2025, Last Modified: 28 Oct 2025, LOG 2025 Poster, CC BY 4.0
Keywords: Manifold learning, Random Forest proximities, Regularized autoencoders, Semi-supervised visualization, Out-of-sample extension
TL;DR: Guided manifold learning and semi-supervised visualization with a natural out-of-sample extension, based on random forest proximities and a diffusion geometry-regularized autoencoder architecture.
Abstract: Extensive research has produced robust methods for unsupervised data visualization. Yet supervised visualization—where expert labels guide representations—remains underexplored, as most supervised approaches prioritize classification over visualization. Recently, RF-PHATE, a diffusion-based manifold learning method leveraging random forests and information geometry, marked significant progress in supervised visualization. However, its lack of an explicit mapping function limits scalability and its application to unseen data, posing challenges for large datasets and label-scarce scenarios. To overcome these limitations, we introduce Random Forest Autoencoders (RF-AE), a neural network-based framework for out-of-sample kernel extension that combines the flexibility of autoencoders with the supervised learning strengths of random forests and the geometry captured by RF-PHATE. RF-AE enables efficient out-of-sample supervised visualization and outperforms existing methods, including RF-PHATE's standard kernel extension, in both accuracy and interpretability. Additionally, RF-AE is robust to the choice of hyperparameters and generalizes to any kernel-based dimensionality reduction method.
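Illustrative sketch: the abstract describes an autoencoder that takes random forest proximity information as input and regularizes its bottleneck toward the geometry captured by RF-PHATE, yielding an explicit encoder for out-of-sample extension. The snippet below is a minimal conceptual sketch of that idea, not the authors' implementation: it assumes a simple leaf-sharing proximity estimate from scikit-learn's RandomForestClassifier, a small PyTorch autoencoder (the RFAE class, layer sizes, and the weight lam are illustrative choices), and a random placeholder Z_target standing in for an actual precomputed RF-PHATE embedding.

```python
# Conceptual sketch (assumptions noted above): an autoencoder over random forest
# proximity rows whose bottleneck is pulled toward a precomputed 2-D embedding.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Simple proximity estimate: fraction of trees in which two samples share a leaf.
leaves = rf.apply(X)                                              # (n_samples, n_trees)
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(-1).astype(np.float32)
P = torch.from_numpy(prox)

# Placeholder for the 2-D RF-PHATE embedding of the training set
# (in practice computed by RF-PHATE; random values stand in here).
Z_target = torch.randn(len(X), 2)

class RFAE(nn.Module):
    """Autoencoder over proximity rows with a 2-D bottleneck (illustrative sizes)."""
    def __init__(self, n, d=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, d))
        self.dec = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n))

    def forward(self, p):
        z = self.enc(p)
        return z, self.dec(z)

model = RFAE(n=P.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0  # weight of the geometric regularizer (assumed hyperparameter)

for epoch in range(200):
    z, p_hat = model(P)
    # Reconstruction of proximities plus a penalty tying the bottleneck
    # to the target embedding coordinates.
    loss = nn.functional.mse_loss(p_hat, P) + lam * nn.functional.mse_loss(z, Z_target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Out-of-sample extension: proximities of new points to the training set
# (same leaf-sharing estimate) are simply encoded; no re-fitting is needed.
new_leaves = rf.apply(X[:5])
new_prox = (new_leaves[:, None, :] == leaves[None, :, :]).mean(-1).astype(np.float32)
with torch.no_grad():
    z_new, _ = model(torch.from_numpy(new_prox))
```

Because the encoder maps proximity rows rather than raw features, embedding a new point only requires its proximities to the training set, which is the kind of explicit out-of-sample mapping the abstract attributes to RF-AE.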
Supplementary Materials: zip
Submission Type: Extended abstract (max 4 main pages).
Submission Number: 9