Escaping the Verifier: Learning to Reason via Demonstrations
By: Locke Cai, Ivan Provilkov
Potential Business Impact:
Teaches AI models to reason from expert examples, without needing an automated answer checker.
Training Large Language Models (LLMs) to reason often relies on Reinforcement Learning (RL) with task-specific verifiers. However, many real-world reasoning-intensive tasks lack verifiers, despite offering abundant expert demonstrations that remain under-utilized in reasoning-focused training. We introduce RARO (Relativistic Adversarial Reasoning Optimization), which learns strong reasoning capabilities from expert demonstrations alone via Inverse Reinforcement Learning. Our method sets up an adversarial interaction between a policy (generator) and a relativistic critic (discriminator): the policy learns to mimic expert answers, while the critic learns to compare policy and expert answers and tell them apart. Both the policy and the critic are trained jointly and continuously via RL, and we identify the key stabilization techniques required for robust learning. Empirically, RARO significantly outperforms strong verifier-free baselines on all of our evaluation tasks (Countdown, DeepMath, and Poetry Writing) and enjoys the same robust scaling trends as RL on verifiable tasks. These results demonstrate that our method effectively elicits strong reasoning performance from expert demonstrations alone, enabling robust reasoning training even when task-specific verifiers are unavailable.
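To make the adversarial setup concrete, here is a minimal sketch of how the policy/critic interaction described above could be wired together. Everything in it is an illustrative assumption rather than the paper's actual implementation: the stand-in models, the pairwise preference score, and the zero-sum reward split are placeholders chosen only to show the shape of the loop.

```python
# Minimal illustrative sketch of a relativistic adversarial training step
# (assumptions, not the paper's implementation): the policy produces an
# answer, the critic compares it against an expert demonstration, and each
# side is rewarded from that comparison before a placeholder RL update.

import random


def policy_generate(prompt):
    # Hypothetical stand-in for sampling an answer from the policy LLM.
    return f"policy answer to: {prompt}"


def critic_compare(prompt, answer_a, answer_b):
    # Hypothetical stand-in for the relativistic critic: returns the
    # probability that answer_a is the expert answer rather than answer_b.
    return random.random()


def rl_update(name, reward):
    # Placeholder for a policy-gradient step (e.g., PPO-style) on `name`.
    print(f"update {name} with reward {reward:.3f}")


def training_step(prompt, expert_answer):
    policy_answer = policy_generate(prompt)

    # Randomize presentation order so the critic cannot exploit position.
    if random.random() < 0.5:
        p_expert_wins = critic_compare(prompt, expert_answer, policy_answer)
    else:
        p_expert_wins = 1.0 - critic_compare(prompt, policy_answer, expert_answer)

    # Policy is rewarded when the critic mistakes its answer for the expert's;
    # the critic is rewarded for telling them apart (adversarial, zero-sum).
    policy_reward = 1.0 - p_expert_wins
    critic_reward = p_expert_wins

    rl_update("policy", policy_reward)
    rl_update("critic", critic_reward)


# Toy demonstration pair in the style of the Countdown task.
training_step("Use 3, 7, 25, 50 to reach 100.", "(50 - 25) * (7 - 3)")
```

In practice both networks would be LLMs updated jointly and continuously, with the stabilization techniques the abstract alludes to; the sketch only illustrates the relativistic comparison and the opposing rewards.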
Similar Papers
RAVR: Reference-Answer-guided Variational Reasoning for Large Language Models
Artificial Intelligence
Helps computers learn to solve harder problems.
From Solving to Verifying: A Unified Objective for Robust Reasoning in LLMs
Machine Learning (CS)
Helps AI check its own thinking better.
Auditable-choice reframing unlocks RL-based verification for open-ended tasks
Artificial Intelligence
Makes AI better at writing stories and following instructions.