Latent Planning via Embedding Arithmetic: A Contrastive Approach to Strategic Reasoning
By: Andrew Hamara, Greg Hamerly, Pablo Rivas, and more
Potential Business Impact:
Teaches computers to plan moves in games like chess.
Planning in high-dimensional decision spaces is increasingly being studied through the lens of learned representations. Rather than training policies or value heads, we investigate whether planning can be carried out directly in an evaluation-aligned embedding space. We introduce SOLIS, which learns such a space using supervised contrastive learning. In this representation, outcome similarity is captured by proximity, and a single global advantage vector orients the space from losing to winning regions. Candidate actions are then ranked according to their alignment with this direction, reducing planning to vector operations in latent space. We demonstrate this approach in chess, where SOLIS uses only a shallow search guided by the learned embedding to reach competitive strength under constrained conditions. More broadly, our results suggest that evaluation-aligned latent planning offers a lightweight alternative to traditional dynamics models or policy learning.
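To make the ranking step concrete, here is a minimal sketch of planning as vector operations in the learned space, under assumed names: `embed` stands for the contrastively trained encoder mapping a position to its embedding, `advantage_dir` for the single global losing-to-winning direction, and `apply_move` for a one-ply lookahead; none of these symbols come from the paper's code.

```python
import numpy as np

def rank_candidate_moves(state, legal_moves, embed, advantage_dir, apply_move):
    """Score each candidate move by how strongly the resulting position's
    embedding aligns with the global advantage direction (hypothetical sketch)."""
    direction = advantage_dir / np.linalg.norm(advantage_dir)
    scored = []
    for move in legal_moves:
        next_state = apply_move(state, move)   # shallow, one-ply lookahead
        z = embed(next_state)                  # evaluation-aligned embedding
        # Cosine alignment with the losing-to-winning direction.
        score = float(np.dot(z / np.linalg.norm(z), direction))
        scored.append((score, move))
    # Moves whose successors lie furthest toward the winning region rank first.
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```

The point of the sketch is that no policy head or learned dynamics model is invoked: candidate selection reduces to embedding successor states and comparing them against one fixed direction in latent space.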
Similar Papers
Learning to Plan via Supervised Contrastive Learning and Strategic Interpolation: A Chess Case Study
CV and Pattern Recognition
Teaches computers to play chess with intuition.
Native Logical and Hierarchical Representations with Subspace Embeddings
Machine Learning (CS)
Computers understand words and their meanings better.
DeepPlanner: Scaling Planning Capability for Deep Research Agents via Advantage Shaping
Artificial Intelligence
Helps smart computer programs plan better.