Propose, Solve, Verify: Self-Play Through Formal Verification
By: Alex Wilf, Pranjal Aggarwal, Bryan Parno, et al.
Training models through self-play alone (without any human data) has been a longstanding goal in AI, but its effectiveness for training large language models remains unclear, particularly in code generation, where rewards based on unit tests are brittle and prone to error propagation. We study self-play in the verified code generation setting, where formal verification provides reliable correctness signals. We introduce Propose, Solve, Verify (PSV), a simple self-play framework in which formal verification signals are used to create a proposer capable of generating challenging synthetic problems and a solver trained via expert iteration. We use PSV to train PSV-Verus, which improves pass@1 by up to 9.6x over inference-only and expert-iteration baselines across three benchmarks. We show that performance scales with the number of generated questions and training iterations, and through ablations we identify formal verification and difficulty-aware proposal as essential ingredients for successful self-play.
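The propose/solve/verify loop described above can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation: all function names (`propose_problems`, `psv_iteration`, etc.) are illustrative stand-ins, and the "formal verifier" is replaced by a trivial arithmetic spec check where the real system would invoke a verifier such as Verus.

```python
# Hypothetical sketch of one Propose-Solve-Verify (PSV) self-play round.
# All names and the toy "specification" are illustrative assumptions,
# not the paper's actual code or API.

def propose_problems(proposer, n):
    # Proposer generates n synthetic problems; here, simple integer specs.
    return [proposer(i) for i in range(n)]

def verify(problem, solution):
    # Stand-in for a formal verifier (e.g., Verus in the real system):
    # checks the solution against the problem's specification.
    # Toy spec: the solution must equal double the problem value.
    return solution == problem * 2

def psv_iteration(proposer, solver, n_problems):
    """One expert-iteration round: keep only verifier-approved pairs."""
    verified_pairs = []
    for problem in propose_problems(proposer, n_problems):
        solution = solver(problem)
        if verify(problem, solution):
            verified_pairs.append((problem, solution))
    # In the real framework, verified pairs would fine-tune the solver,
    # and difficulty signals (e.g., solve rates) would steer the proposer
    # toward challenging problems.
    return verified_pairs

# Toy run: proposer emits integers 0..5; the solver answers correctly
# only on even inputs, so the verifier filters out the odd ones.
pairs = psv_iteration(proposer=lambda i: i,
                      solver=lambda p: p * 2 if p % 2 == 0 else p,
                      n_problems=6)
print(pairs)  # → [(0, 0), (2, 4), (4, 8)]
```

The key property this sketch shows is that only verifier-approved (problem, solution) pairs enter the training set, which is what makes the reward signal reliable compared to brittle unit-test rewards.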