Unifying Regression and Uncertainty Quantification with Contrastive Spectral Representation Learning

Published: 23 Sept 2025, Last Modified: 29 Oct 2025 · NeurReps 2025 Poster · CC BY 4.0
Keywords: equivariant representation learning, contrastive learning, geometric deep learning, harmonic analysis, uncertainty quantification, conditional density estimation
TL;DR: We present preliminary empirical results for a contrastive representation learning algorithm that trains symmetry-agnostic and equivariant neural networks for regression tasks with uncertainty quantification.
Abstract: In this work, we discuss a contrastive representation learning framework, called NCP, which introduces a new paradigm for training deep neural network architectures for regression, enabling high-quality regression estimates _and_ parametric uncertainty quantification without retraining or restrictive assumptions on the uncertainty distribution. NCP learns high-dimensional data representations that are linearly transferable to regression and uncertainty quantification tasks, backed by non-asymptotic statistical learning guarantees linking representation quality to downstream performance. Crucially, in equivariant regression contexts, the NCP framework can be adapted to train any geometric deep learning architecture, yielding a _disentangled_ equivariant representation learning algorithm with first-of-its-kind statistical guarantees for equivariant regression and symmetry-aware uncertainty quantification.
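To make the notion of "linearly transferable" concrete, here is a minimal, hypothetical sketch (not the NCP algorithm itself): given a frozen encoder `phi`, both a regression head and a variance head are obtained by ordinary linear least squares on top of the representation, with no retraining of the encoder. The random-Fourier-feature encoder, the synthetic data, and the ridge penalty are all illustrative assumptions.

```python
import numpy as np

# Illustrative stand-ins, NOT the NCP method: a fixed (frozen) feature map,
# synthetic heteroscedastic data, and two linear heads fit on top of it.

rng = np.random.default_rng(0)

def phi(x, d=32):
    """Stand-in frozen encoder: fixed random-Fourier-style features of a scalar input."""
    w = np.linspace(0.5, d / 2, d)  # fixed frequencies (illustrative choice)
    return np.concatenate([np.cos(np.outer(x, w)), np.sin(np.outer(x, w))], axis=1)

# Synthetic data: y = sin(3x) with noise whose scale grows with |x|
x = rng.uniform(-1, 1, 500)
y = np.sin(3 * x) + rng.normal(0, 0.05 + 0.2 * np.abs(x))

Z = phi(x)
ridge = 1e-3 * np.eye(Z.shape[1])  # small regulariser for a stable solve

# Linear regression head on the frozen representation
w_mean = np.linalg.solve(Z.T @ Z + ridge, Z.T @ y)
mean = Z @ w_mean

# Linear uncertainty head: regress squared residuals to estimate local variance
r2 = (y - mean) ** 2
w_var = np.linalg.solve(Z.T @ Z + ridge, Z.T @ r2)
var = np.clip(Z @ w_var, 1e-6, None)

print(f"RMSE: {np.sqrt(np.mean((y - mean) ** 2)):.3f}")
print(f"predicted std near x=0:  {np.sqrt(var[np.abs(x) < 0.1]).mean():.3f}")
print(f"predicted std near |x|=1: {np.sqrt(var[np.abs(x) > 0.9]).mean():.3f}")
```

The point of the sketch is the workflow the abstract describes: once the representation is learned, both the point estimate and a per-input uncertainty come from cheap linear fits, so neither task requires retraining the network.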
Submission Number: 20