Score: 2

Escaping the Verifier: Learning to Reason via Demonstrations

Published: November 26, 2025 | arXiv ID: 2511.21667v1

By: Locke Cai, Ivan Provilkov

BigTech Affiliations: Together AI, Massachusetts Institute of Technology

Potential Business Impact:

Teaches AI models to reason from expert examples alone, removing the need for task-specific verifiers.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Training Large Language Models (LLMs) to reason often relies on Reinforcement Learning (RL) with task-specific verifiers. However, many real-world reasoning-intensive tasks lack verifiers, even though they offer abundant expert demonstrations that remain underutilized for reasoning-focused training. We introduce RARO (Relativistic Adversarial Reasoning Optimization), an Inverse Reinforcement Learning method that learns strong reasoning capabilities from expert demonstrations alone. RARO sets up an adversarial interaction between a policy (generator) and a relativistic critic (discriminator): the policy learns to mimic expert answers, while the critic learns to compare and distinguish policy answers from expert answers. Both the policy and the critic are trained jointly and continuously via RL, and we identify the key stabilization techniques required for robust learning. Empirically, RARO significantly outperforms strong verifier-free baselines on all of our evaluation tasks (Countdown, DeepMath, and Poetry Writing) and exhibits the same robust scaling trends as RL on verifiable tasks. These results demonstrate that RARO effectively elicits strong reasoning performance from expert demonstrations alone, enabling robust reasoning learning even when task-specific verifiers are unavailable.
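
The adversarial loop the abstract describes can be summarized in a short sketch. The code below is a minimal illustration under loose assumptions: `Policy`, `RelativisticCritic`, and their methods are hypothetical stand-ins invented for this example, not the paper's implementation, and the placeholder updates would be RL (policy-gradient) steps on actual LLMs in practice.

```python
import random

class Policy:
    """Stand-in generator: proposes an answer for a prompt."""
    def sample_answer(self, prompt):
        return f"candidate answer to {prompt!r} #{random.randint(0, 9)}"
    def rl_update(self, prompt, answer, reward):
        pass  # placeholder for a policy-gradient step on a real LLM

class RelativisticCritic:
    """Stand-in discriminator: scores an (expert, policy) answer pair,
    estimating the probability that the expert answer is the better one."""
    def compare(self, prompt, expert_answer, policy_answer):
        return random.random()  # placeholder for a learned pairwise score
    def rl_update(self, prompt, expert_answer, policy_answer):
        pass  # placeholder: reward the critic for ranking expert above policy

def raro_step(policy, critic, prompt, expert_answer):
    """One joint update of the adversarial pair described in the abstract."""
    generated = policy.sample_answer(prompt)
    # "Relativistic" comparison: the critic judges the pair jointly
    # rather than scoring each answer in isolation.
    p_expert_wins = critic.compare(prompt, expert_answer, generated)
    # The policy is rewarded when the critic cannot prefer the expert.
    policy.rl_update(prompt, generated, reward=1.0 - p_expert_wins)
    # The critic is trained jointly so it keeps distinguishing the two.
    critic.rl_update(prompt, expert_answer, generated)

raro_step(Policy(), RelativisticCritic(),
          "Reach 24 using 4 and 6.", "4 * 6 = 24")
```

The design point the sketch highlights is that the critic's pairwise judgment replaces a task-specific verifier: the only supervision signal is the expert demonstration, and the policy's reward comes entirely from how indistinguishable its answers are from the expert's.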

Country of Origin
🇺🇸 United States

Page Count
34 pages

Category
Computer Science:
Machine Learning (CS)