Keywords: JEPA, Generative Models, Latent Space Reasoning, Self-Supervised Learning, Representation Learning
TL;DR: We propose a decoupled, generative JEPA model that reasons autoregressively in continuous latent space, offering both superior robustness to compounding error and the potential for multi-threaded reasoning.
Abstract: While the Joint-Embedding Predictive Architecture (JEPA) has emerged as a powerful approach for learning rich latent representations, it fundamentally lacks generative ability. Meanwhile, latent-space reasoning methods for Transformer models, such as COCONUT, do improve performance, but they ultimately rely on token-by-token generation, which still accumulates compounding error and depends on context information to derive reasoning insights. To address these limitations, we propose JEPA-Reasoner, a JEPA model endowed with generative ability that reasons in latent space. We augment it with a separate action-taker model, Talker, which produces human-readable text. Our approach demonstrates that decoupling latent-space reasoning from token generation enables JEPA-Reasoner to produce mixed latent vectors that may lay the foundation for multi-threaded reasoning, while performing autoregressive generation with superior robustness to compounding error.
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 18762