LADDER: Self-Improving LLMs Through Recursive Problem Decomposition
By: Toby Simonds, Akira Yoshiyama
Potential Business Impact:
Teaches computers to solve harder math problems on their own.
We introduce LADDER (Learning through Autonomous Difficulty-Driven Example Recursion), a framework that enables Large Language Models to autonomously improve their problem-solving capabilities through self-guided learning: the model recursively generates and solves progressively simpler variants of complex problems. Unlike prior approaches that require curated datasets or human feedback, LADDER leverages a model's own capabilities to generate easier question variants. We demonstrate LADDER's effectiveness on mathematical integration, improving Llama 3.2 3B's accuracy from 1% to 82% on undergraduate-level problems and enabling Qwen2.5 7B Deepseek-R1 Distilled to achieve 73% on the MIT Integration Bee qualifying examination. We also introduce TTRL (Test-Time Reinforcement Learning), in which we perform reinforcement learning on variants of test problems at inference time. TTRL enables Qwen2.5 7B Deepseek-R1 Distilled to achieve a state-of-the-art score of 90% on the MIT Integration Bee qualifying examination, surpassing OpenAI o1's performance. These results show how self-directed strategic learning can achieve significant capability improvements without relying on architectural scaling or human supervision.
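The abstract describes the LADDER loop but includes no implementation. Below is a minimal sketch of that loop under stated assumptions: every name here (`generate_variants`, `verify`, `rl_update`, `model.solve`) is a hypothetical placeholder standing in for components the abstract only names in prose (model-written easier variants, automatic answer verification, and a verifier-rewarded reinforcement-learning update), not the authors' code.

```python
# Hypothetical sketch of the LADDER training loop described in the abstract.
# All callables are placeholders supplied by the caller; none are from the paper.

from typing import Callable, List


def ladder_train(
    model,                                   # the LLM being improved
    hard_problems: List[str],                # e.g., hard integration problems
    generate_variants: Callable[[object, str], List[str]],  # model writes easier variants
    verify: Callable[[str, str], bool],      # automatic checker, e.g. numeric equivalence
    rl_update: Callable[[object, str, str, float], None],   # any verifier-rewarded RL step
    depth: int = 3,
):
    for problem in hard_problems:
        # Build a difficulty ladder: each level holds simpler variants of the
        # problems one level above, generated by the model itself.
        levels = [[problem]]
        for _ in range(depth):
            easier = [v for p in levels[-1] for v in generate_variants(model, p)]
            levels.append(easier)

        # Train from easiest to hardest, so verified successes on simple
        # variants provide reward signal that transfers up the ladder.
        for level in reversed(levels):
            for variant in level:
                answer = model.solve(variant)            # hypothetical sampling call
                reward = 1.0 if verify(variant, answer) else 0.0
                rl_update(model, variant, answer, reward)
    return model
```

On this reading, TTRL is the same loop applied at inference time: each test problem becomes the root of its own ladder, and the model is updated on the generated variants before answering the original question.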
Similar Papers
MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer
Computation and Language
Helps computers solve math problems like humans.
RLSR: Reinforcement Learning from Self Reward
Machine Learning (CS)
AI learns to solve problems by checking its own work.
Breaking Thought Patterns: A Multi-Dimensional Reasoning Framework for LLMs
Computation and Language
Makes AI think more creatively and solve harder problems.