JEPA-Reasoner: Decoupling Latent Reasoning from Token Generation
By: Bingyang Kelvin Liu, Ziyu Patrick Chen
Potential Business Impact:
AI learns to think and talk better.
While the Joint-Embedding Predictive Architecture (JEPA) has emerged as a powerful paradigm for learning rich latent representations, it fundamentally lacks generative ability. Meanwhile, latent-space reasoning approaches for Transformer models, such as COCONUT, do improve performance, but they ultimately rely on token-by-token generation, which still accumulates compounding error and depends on context information to derive reasoning insights. To address these limitations, we propose JEPA-Reasoner, a novel JEPA model endowed with generative ability that reasons entirely in latent space. We augment it with a separate action-taker model, Talker, which produces human-readable sentences. Our approach demonstrates that decoupling latent-space reasoning from token generation enables JEPA-Reasoner to produce mixed latent vectors that may lay the foundation for multi-threaded reasoning, while performing autoregressive generation with superior robustness to compounding error.
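The decoupling the abstract describes can be sketched roughly as two independent modules: one that autoregresses in latent space and one that verbalizes finished latent states. The sketch below is an illustrative assumption only; the module names (LatentReasoner, Talker), dimensions, and layer choices are not taken from the paper.

```python
# Minimal sketch of decoupled latent reasoning vs. token generation.
# All names, sizes, and layer choices here are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class LatentReasoner(nn.Module):
    """Predicts the next latent state from previous latent states,
    so reasoning happens entirely in embedding space (no tokens)."""

    def __init__(self, d_latent: int = 256, n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_latent, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.predictor = nn.Linear(d_latent, d_latent)

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (batch, steps, d_latent) -> next latent: (batch, d_latent)
        h = self.encoder(latents)
        return self.predictor(h[:, -1])


class Talker(nn.Module):
    """Separate decoder that turns a finished chain of latent states into
    token logits; its output never feeds back into the reasoning loop."""

    def __init__(self, d_latent: int = 256, vocab_size: int = 32000):
        super().__init__()
        self.proj = nn.Linear(d_latent, vocab_size)

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (batch, steps, d_latent) -> logits: (batch, steps, vocab)
        return self.proj(latents)


if __name__ == "__main__":
    reasoner, talker = LatentReasoner(), Talker()
    z = torch.randn(2, 1, 256)            # initial latent from some input encoder
    for _ in range(5):                     # autoregress in latent space only
        z_next = reasoner(z).unsqueeze(1)
        z = torch.cat([z, z_next], dim=1)
    logits = talker(z)                     # verbalize only after reasoning ends
    print(logits.shape)                    # torch.Size([2, 6, 32000])
```

Because the reasoning loop never round-trips through discrete tokens, errors from the Talker's decoding cannot feed back into subsequent reasoning steps, which is the robustness-to-compounding-error argument the abstract makes.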
Similar Papers
Latent Reasoning in LLMs as a Vocabulary-Space Superposition
Computation and Language
Makes computers think faster, using less power.
Lightweight Latent Reasoning for Narrative Tasks
Computation and Language
Makes AI think faster and use less power.
Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs
CV and Pattern Recognition
Guides AI to think in different ways for better answers.