Model Transferability Informed by Embedding’s Topology

Published: 23 Sept 2025 · Last Modified: 27 Nov 2025 · NeurReps 2025 Poster · CC BY 4.0
Keywords: Topological Data Analysis, Persistent Homology, Model Selection, Representation Learning, Fine-Tuning, Neural Collapse
TL;DR: We propose a transferability score based on the persistent homology of feature embeddings, which effectively ranks pre-trained models by measuring their cluster separation and compactness.
Abstract: In this work, we tackle the challenge of predicting the performance of a pre-trained classification model on a downstream task before fine-tuning. Our approach leverages the geometric information encoded in the feature embeddings of pre-trained networks, which we analyze using persistence diagrams generated from a Vietoris-Rips filtration. We find that during late-stage training, the separation between the highest-persistence features and the remaining low-persistence features mirrors the dynamics of neural collapse. During early training, however, our topological measures behave quite differently, while the geometric structure of the embeddings is still stabilizing. We propose a transferability score based on the ratio of these topological features. We evaluate its performance in ranking models for fine-tuning and show that it achieves competitive results against established methods.
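The score described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it computes the H0 (connected-component) persistence of a Vietoris-Rips filtration on a point cloud via the Euclidean minimum spanning tree (the finite H0 death times are exactly the MST edge lengths), then forms a hypothetical score as the ratio of the mean persistence of the longest bars (cluster separation) to the mean of the remaining bars (cluster compactness). The function names, the restriction to H0, and the exact ratio are all assumptions for illustration.

```python
import math


def h0_persistence(points):
    """Death times of the finite H0 bars of a Vietoris-Rips filtration.

    Every point is born as its own component at scale 0; a component dies
    when it merges into an older one, which happens exactly at the edge
    lengths of the Euclidean minimum spanning tree (Prim's algorithm).
    Returns the n - 1 finite bar lengths, longest first.
    """
    n = len(points)
    best = {i: math.dist(points[0], points[i]) for i in range(1, n)}
    deaths = []
    while best:
        j = min(best, key=best.get)      # next vertex to join the tree
        deaths.append(best.pop(j))       # its merge scale = bar length
        for k in best:                   # relax remaining distances
            d = math.dist(points[j], points[k])
            if d < best[k]:
                best[k] = d
    return sorted(deaths, reverse=True)


def topology_score(points, n_clusters):
    """Hypothetical transferability proxy (illustrative, not the paper's
    exact formula): mean persistence of the (n_clusters - 1) longest H0
    bars (between-cluster separation) divided by the mean persistence of
    the remaining bars (within-cluster compactness)."""
    bars = h0_persistence(points)
    sep = bars[: n_clusters - 1]
    comp = bars[n_clusters - 1 :]
    return (sum(sep) / len(sep)) / (sum(comp) / len(comp))
```

On embeddings that form two tight, well-separated class clusters, the single long H0 bar dwarfs the short within-cluster bars, so the score is large; for overlapping clusters it approaches 1. A real pipeline would compute the diagrams with a TDA library such as ripser or GUDHI rather than this pure-Python MST.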
Poster PDF: pdf
Submission Number: 94