Liquid Reasoning Transformers: A Sudoku-Based Prototype for Chess-Scale Algorithmic Tasks
By: Shivansh Sahni, Wenzhi Zhang
The Liquid Reasoning Transformer (LRT) is a transformer architecture designed for adaptive-depth inference through iterative refinement, discard-based correction, and a learned stopping mechanism. Instead of relying on a single feedforward pass, the model updates a recurrent reasoning token over multiple internal steps, allowing it to correct early errors and allocate computation according to input difficulty. We evaluate the LRT on Sudoku as a controlled testbed for structured reasoning and show that it achieves strong performance, reaching 98.68% digit accuracy and 36.30% full-puzzle accuracy without symbolic rules or search. Analysis of internal dynamics shows that the discard and stop gates play distinct, complementary roles: stabilizing intermediate predictions and modulating computational depth. We discuss how these mechanisms extend naturally to chess-scale reasoning tasks and outline extensions toward multi-token reasoning and larger domains.
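The control flow described above can be sketched as a gated recurrence with a halting signal. This is a minimal, hypothetical illustration in plain NumPy, not the authors' implementation: the weight matrices, gate parameterizations, and the cumulative-halting rule (in the spirit of adaptive computation time) are all assumptions made for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # dimensionality of the recurrent reasoning token (illustrative)

# Randomly initialized parameters standing in for learned weights.
W_update = rng.normal(scale=0.3, size=(d, d))
w_discard = rng.normal(scale=0.3, size=d)
w_stop = rng.normal(scale=0.3, size=d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reason(z, max_steps=16, threshold=0.99):
    """Iteratively refine a reasoning token z.

    Each step proposes an update, a discard gate decides how much of the
    old state to overwrite (enabling correction of early errors), and a
    learned stop signal accumulates until computation halts.
    """
    halt_total = 0.0
    steps = 0
    for _ in range(max_steps):
        candidate = np.tanh(W_update @ z)   # proposed refinement of the state
        g = sigmoid(w_discard @ z)          # discard gate in [0, 1]
        z = (1 - g) * z + g * candidate     # gated overwrite of prior reasoning
        halt_total += sigmoid(w_stop @ z)   # learned stopping signal accumulates
        steps += 1
        if halt_total >= threshold:         # harder inputs take more steps
            break
    return z, steps

z0 = rng.normal(size=d)
z_final, n_steps = reason(z0)
```

Because the stop gate is input-dependent, different initial states `z0` can halt after different numbers of steps, which is the adaptive-depth behavior the abstract refers to.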