SLR: Automated Synthesis for Scalable Logical Reasoning
By: Lukas Helff, Ahmad Omar, Felix Friedrich, and more
Potential Business Impact:
Trains AI to reason logically without costly human-labeled data.
We introduce SLR, an end-to-end framework for systematic evaluation and training of Large Language Models (LLMs) via Scalable Logical Reasoning. Given a user's task specification, SLR automatically synthesizes (i) an instruction prompt for an inductive reasoning task, (ii) a validation program, executable on model outputs to provide verifiable rewards, and (iii) the latent ground-truth rule. This process is fully automated, scalable, requires no human annotations, and offers precise control over task difficulty. Using SLR, we create SLR-Bench, a benchmark comprising 19k prompts organized into 20 curriculum levels that progressively increase in relational, arithmetic, and recursive complexity. Large-scale evaluation reveals that contemporary LLMs readily produce syntactically valid rules, yet often fail at correct logical inference. Recent reasoning LLMs demonstrate improved performance but incur very high test-time computation, with costs exceeding $300 for just 1,000 prompts. Finally, curriculum learning via SLR doubles Llama-3-8B accuracy on SLR-Bench, achieving parity with Gemini-Flash-Thinking at a fraction of the computational cost. Moreover, these reasoning capabilities generalize to a wide range of established benchmarks, underscoring the effectiveness of SLR for downstream reasoning.
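To make the three synthesized artifacts concrete, here is a minimal, self-contained sketch of what such a task could look like. This is an illustration under stated assumptions, not the paper's implementation: the names `SyntheticTask`, `make_task`, and `validation_program` are hypothetical, the examples are toy integer pairs rather than the paper's logic-programming tasks, and candidate rules are represented as Python lambdas purely for brevity.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# It mimics SLR's three synthesized artifacts for a toy inductive task:
# (i) an instruction prompt, (ii) a validation program executable on a
# model's output to yield a verifiable reward, and (iii) the latent
# ground-truth rule. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable

Example = tuple[int, int]          # a toy relational example (x, y)
Rule = Callable[[int, int], bool]  # a candidate rule over examples

@dataclass
class SyntheticTask:
    prompt: str                 # (i) instruction prompt shown to the LLM
    positives: list[Example]    # examples the latent rule accepts
    negatives: list[Example]    # examples the latent rule rejects
    ground_truth: Rule          # (iii) latent rule, hidden from the model

def make_task() -> SyntheticTask:
    """Synthesize one toy task whose latent rule is 'y = x + 2'."""
    rule: Rule = lambda x, y: y == x + 2
    pos = [(1, 3), (4, 6), (10, 12)]
    neg = [(1, 2), (4, 9), (10, 10)]
    prompt = (
        "Induce a rule relating x and y.\n"
        f"Positive examples: {pos}\nNegative examples: {neg}\n"
        "Answer with a Python lambda, e.g. 'lambda x, y: ...'."
    )
    return SyntheticTask(prompt, pos, neg, rule)

def validation_program(task: SyntheticTask, model_output: str) -> float:
    """(ii) Executable check on the model's output: reward 1.0 iff the
    candidate rule classifies every example correctly, else 0.0."""
    try:
        candidate: Rule = eval(model_output)  # toy parsing; unsafe in general
    except Exception:
        return 0.0  # syntactically invalid rule
    covers_pos = all(candidate(x, y) for x, y in task.positives)
    rejects_neg = not any(candidate(x, y) for x, y in task.negatives)
    return 1.0 if covers_pos and rejects_neg else 0.0

task = make_task()
print(task.prompt)
print("reward:", validation_program(task, "lambda x, y: y == x + 2"))  # 1.0
print("reward:", validation_program(task, "lambda x, y: y > x"))       # 0.0
```

The key property this sketch tries to capture is that the reward comes from executing a program against the synthesized examples, so no human annotation is needed, and task difficulty can in principle be scaled by generating more complex latent rules.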
Similar Papers
Structured Reasoning for Large Language Models
Computation and Language
Makes AI think smarter and faster with shorter reasoning.
SSR: Socratic Self-Refine for Large Language Model Reasoning
Computation and Language
Makes AI think better, step by step.
An Explicit Syllogistic Legal Reasoning Framework for Large Language Models
Computation and Language
Helps computers make fair legal decisions.