Keywords: Self-supervised Learning, Relational Representation Learning, Analogical Reasoning, Joint-Embedding Predictive Architecture (JEPA), Relational Invariance, Objective–Regime Compatibility
Abstract: Self-supervised relational invariance emerges only when the training objective is compatible with the relational structure of the data. Some analogy datasets contain multiple transformation families and require explicit discrimination among them, while others are dominated by a single transformation type, making negative samples inherently ambiguous. Applying a single objective across both settings leads to systematic failure, even with identical architectures: contrastive learning introduces false negatives in relation-sharing data, while non-contrastive objectives collapse distinct relations when discrimination is required. We propose Relational JEPA (R-JEPA), which represents transformations between paired observations as explicit relation embeddings and applies prediction directly in relation space, with the objective selected according to the induced data regime. Across text-based analogy benchmarks, regime-matched training improves analogy verification, completion, and entity-disjoint transfer over state-based baselines, while mismatched objectives yield misleading geometric structure without relational invariance.
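To make the abstract's architecture concrete, here is a minimal sketch of an R-JEPA-style model: relation embeddings computed from observation pairs, a predictor operating in relation space, and a regime-dependent choice between a contrastive and a non-contrastive objective. It is based only on the description above; all module sizes, names, and the exact loss forms are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Relational-JEPA-style model (assumptions: PyTorch, MLP encoders,
# InfoNCE vs. stop-gradient cosine losses; hyperparameters are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RJEPASketch(nn.Module):
    def __init__(self, obs_dim=128, rel_dim=64):
        super().__init__()
        # State encoder f(x): embeds a single observation.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, 128))
        # Relation encoder g(f(x), f(x')): embeds the transformation between a pair.
        self.rel_encoder = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, rel_dim))
        # Predictor maps one pair's relation embedding toward the analogous
        # pair's embedding, i.e. prediction happens directly in relation space.
        self.predictor = nn.Sequential(
            nn.Linear(rel_dim, 128), nn.ReLU(), nn.Linear(128, rel_dim))

    def relation(self, x, x_prime):
        h = torch.cat([self.encoder(x), self.encoder(x_prime)], dim=-1)
        return self.rel_encoder(h)

    def forward(self, a, a_prime, b, b_prime, regime="contrastive", tau=0.1):
        r_src = self.relation(a, a_prime)   # relation of source pair (a, a')
        r_tgt = self.relation(b, b_prime)   # relation of analogous pair (b, b')
        pred = F.normalize(self.predictor(r_src), dim=-1)
        tgt = F.normalize(r_tgt, dim=-1)
        if regime == "contrastive":
            # Multi-relation regime: InfoNCE over in-batch negatives forces
            # discrimination between distinct transformation families.
            logits = pred @ tgt.t() / tau
            labels = torch.arange(pred.size(0), device=pred.device)
            return F.cross_entropy(logits, labels)
        # Relation-sharing regime: non-contrastive cosine loss with a
        # stop-gradient target, avoiding false negatives when other batch
        # pairs instantiate the same relation.
        return (2 - 2 * (pred * tgt.detach()).sum(dim=-1)).mean()
```

In this reading, the regime switch is the abstract's "objective selected according to the induced data regime": the architecture is held fixed and only the loss applied to the relation embeddings changes.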
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: self-supervised learning, representation learning, generalization, contrastive learning, transfer learning / domain adaptation
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 8528