Session: General
Keywords: Manifold Alignment, Guided Representation Learning, Out-of-Sample Extension, Regularized Autoencoders
TL;DR: We propose a twin autoencoder architecture that enables out-of-sample extension for semi-supervised manifold alignment, preserving joint geometric structure and facilitating cross-domain mappings without requiring full re-computation.
Abstract: Manifold alignment aims to find a shared representation across multiple domains, learning inter-domain relationships while retaining the intra-domain structure of each domain. Traditional manifold alignment methods lack mechanisms for out-of-sample extension, so the full embedding alignment must be re-computed whenever new data are introduced. This limitation reduces their scalability and generalizability to unseen data, posing challenges for real-world applications. To address these issues, we propose an out-of-sample extension that generalizes to most semi-supervised manifold alignment methods. Our approach leverages a twin autoencoder architecture for multimodal learning, in which each autoencoder is trained on a single modality and regularized using a pre-aligned joint embedding. This architecture enables direct out-of-sample extension of points from either modality while preserving a joint geometric structure that facilitates cross-domain mappings. We validate our approach on bimodal datasets, demonstrating meaningful alignment preservation.
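To make the architecture described in the abstract concrete, here is a minimal sketch of a twin autoencoder with joint-embedding regularization. This is an illustrative reconstruction from the abstract alone, not the authors' implementation: the class and function names (`ModalityAutoencoder`, `twin_loss`), the network depths, the MSE regularizer toward the pre-aligned embedding, and the weight `lam` are all assumptions.

```python
import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Autoencoder for a single modality (hypothetical architecture).

    The latent space is later regularized toward a pre-aligned joint
    embedding, so encoding a new point yields a direct out-of-sample
    extension into the shared space.
    """
    def __init__(self, input_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def twin_loss(ae_a, ae_b, x_a, x_b, z_joint_a, z_joint_b, lam=1.0):
    """Per-modality reconstruction loss plus a regularizer anchoring each
    latent code to its pre-computed joint (aligned) embedding.

    z_joint_a / z_joint_b are the pre-aligned embeddings produced by any
    semi-supervised manifold alignment method; lam trades reconstruction
    against alignment fidelity (both assumed here, not from the paper).
    """
    mse = nn.functional.mse_loss
    z_a, xa_hat = ae_a(x_a)
    z_b, xb_hat = ae_b(x_b)
    recon = mse(xa_hat, x_a) + mse(xb_hat, x_b)
    align = mse(z_a, z_joint_a) + mse(z_b, z_joint_b)
    return recon + lam * align
```

Under this sketch, out-of-sample extension is just `ae_a.encoder(x_new)`, and a cross-domain mapping composes one modality's encoder with the other's decoder, e.g. `ae_b.decoder(ae_a.encoder(x_a))`, since the regularizer keeps both latent spaces tied to the same joint geometry.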
Submission Number: 67