SRTD: A Symmetric Divergence for Interpretable Comparison of Representation Topology

Published: 23 Sept 2025, Last Modified: 29 Oct 2025, NeurReps 2025 Poster, CC BY 4.0
Keywords: Topological Data Analysis, Representation Learning, Neural Network Analysis, Large Language Models
TL;DR: We propose Symmetric Representation Topology Divergence (SRTD), a method that addresses the asymmetry of the original RTD measure. SRTD also shows potential for large language model (LLM) fingerprinting.
Abstract: Representation Topology Divergence (RTD) has emerged as a powerful tool for analyzing topological differences between point clouds, especially within neural network representations. However, unlike symmetric distance metrics, the divergence computed in the two directions often yields vastly different values. The current practice of averaging these two quantities to enforce symmetry lacks clear theoretical justification and interpretability, and the Max-RTD variant, mentioned by Ilya Trofimov, has rarely been discussed or explored. Furthermore, unlike CKA, RTD has not been thoroughly investigated or applied across various domains of machine learning, particularly Large Language Models (LLMs). In this paper, we reveal the complementary nature of RTD and its symmetric version. We introduce a more faithful and comprehensive Symmetric Representation Topology Divergence (SRTD), which enriches the interpretability of the RTD framework. We establish a series of mathematical properties of SRTD and its lightweight variant, SRTD-lite. Through experiments on both synthetic and real-world data, we demonstrate that SRTD and SRTD-lite outperform their one-sided divergence counterparts in terms of computational efficiency and accuracy. Additionally, by applying SRTD to compare the representation spaces of various LLMs, we showcase its strong capability in distinguishing models of different origins.
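For context, the two symmetrization baselines the abstract contrasts SRTD against (averaging the two directional RTD values, and the Max-RTD variant) can be sketched as follows. This is a minimal illustration only: `rtd_fn`, `A`, `B`, and `mode` are hypothetical names standing in for a one-directional RTD implementation and two representation point clouds, and the actual SRTD construction proposed in the paper is not shown here.

```python
import numpy as np


def symmetrized_rtd(rtd_fn, A, B, mode="avg"):
    """Combine the two one-directional RTD values into a single symmetric score.

    rtd_fn : callable taking two (n_points, dim) arrays and returning the
             one-directional RTD value (hypothetical stand-in; not the SRTD
             proposed in the paper).
    mode   : "avg" reproduces the common averaging heuristic,
             "max" corresponds to the Max-RTD variant.
    """
    forward = rtd_fn(A, B)   # divergence measured in one direction
    backward = rtd_fn(B, A)  # divergence measured in the other direction
    if mode == "avg":
        return 0.5 * (forward + backward)
    if mode == "max":
        return max(forward, backward)
    raise ValueError(f"unknown mode: {mode}")


if __name__ == "__main__":
    # Toy placeholder divergence, used only to exercise the symmetrization logic.
    def toy_rtd(X, Y):
        return float(np.abs(X.mean() - Y.std()))

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 8))
    B = rng.normal(size=(100, 8))
    print(symmetrized_rtd(toy_rtd, A, B, mode="avg"))
    print(symmetrized_rtd(toy_rtd, A, B, mode="max"))
```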
Submission Number: 31