Contrastive Learning with Latent Tension Regularization for Tight Orbits

Published: 23 Sept 2025, Last Modified: 27 Nov 2025, NeurReps 2025 Proceedings, CC BY 4.0
Keywords: Self-supervised learning, Contrastive learning, Representation learning, Orbit regularization, Latent geometry, Intra-orbit variance, Tension score
TL;DR: We introduce Orbit Regularization Loss (ORL), a lightweight extension of NT-Xent that reweights negatives via a tension score. ORL adds a geometric bias for compact, transformation-consistent orbits, without extra architecture, supervision, or cost.
Abstract: In self-supervised contrastive learning, multiple augmentations of the same input naturally form a set of latent representations, or an orbit. Ideally, these representations should remain compact and directionally consistent under transformations. Standard methods such as SimCLR prioritize separating different samples but do not explicitly enforce intra-orbit coherence, allowing augmented views of the same input to drift in latent space. We propose Orbit Regularization Loss (ORL), a lightweight extension to the Normalized Temperature-scaled Cross-Entropy (NT-Xent) loss that reweights negative pairs based on a tension score: a measure of alignment between the positive-pair direction and each candidate negative's displacement. This encourages augmented views to align along stable latent directions, reducing orbit spread without architectural changes or additional supervision. For now, ORL is aimed at improving the geometric structure of embeddings rather than directly targeting downstream classification accuracy. Experiments on MNIST and CIFAR-10 show that ORL lowers intra-orbit variance, improves directional consistency, and yields a more coherent latent space geometry compared to the NT-Xent baseline.
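The loss itself is not reproduced on this page, but the abstract pins down the main ingredients: an NT-Xent objective whose negatives are reweighted by a tension score measuring alignment between the positive-pair direction and each negative's displacement. Below is a minimal PyTorch sketch of one plausible instantiation. The cosine-based tension score and the `1 + lam * tension` weighting factor are assumptions for illustration, not the paper's verified formulation, and `orbit_regularized_nt_xent`, `tau`, and `lam` are hypothetical names.

```python
# Minimal sketch of NT-Xent with tension-weighted negatives (assumed form).
import torch
import torch.nn.functional as F


def orbit_regularized_nt_xent(z1, z2, tau=0.5, lam=1.0):
    """NT-Xent where each negative's weight grows with its tension score.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    tau: softmax temperature; lam: strength of the tension reweighting.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # (2N, D)
    n = z1.size(0)

    sim = z @ z.T / tau                             # (2N, 2N) cosine logits
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(eye, float('-inf'))            # drop self-similarity

    # Index of each anchor's positive (the other view of the same input).
    pos_idx = torch.arange(2 * n, device=z.device).roll(n)

    # Positive-pair direction for every anchor: d_i = normalize(z_pos - z_i).
    d = F.normalize(z[pos_idx] - z, dim=1)          # (2N, D)

    # Displacement of every candidate k from anchor i: z_k - z_i, normalized.
    disp = F.normalize(z.unsqueeze(0) - z.unsqueeze(1), dim=2)  # (2N, 2N, D)

    # Tension score: |cos| between positive direction and each displacement.
    tension = (disp * d.unsqueeze(1)).sum(dim=2).abs()          # (2N, 2N)

    # Upweight negatives lying along the positive-pair direction (assumed).
    weights = 1.0 + lam * tension
    weights[eye] = 0.0
    weights[torch.arange(2 * n), pos_idx] = 1.0     # positives stay unweighted

    # Weighted softmax cross-entropy over each row of logits.
    logits = sim + torch.log(weights.clamp_min(1e-12))
    return F.cross_entropy(logits, pos_idx)
```

Since the reweighting enters only as an additive log-weight on the logits, this sketch reduces exactly to the NT-Xent baseline at `lam = 0`, which matches the abstract's framing of ORL as a lightweight extension rather than a replacement loss.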
Submission Number: 11