DéjàQ: Open-Ended Evolution of Diverse, Learnable and Verifiable Problems
By: Willem Röpke, Samuel Coward, Andrei Lupu, and more
Potential Business Impact:
Teaches computers math by making new problems.
Recent advances in reasoning models have yielded impressive results in mathematics and coding. However, most approaches rely on static datasets, which have been suggested to encourage memorisation and limit generalisation. We introduce DéjàQ, a framework that departs from this paradigm by jointly evolving a diverse set of synthetic mathematical problems alongside model training. This evolutionary process adapts to the model's ability throughout training, optimising problems for learnability. We propose two LLM-driven mutation strategies in which the model itself mutates the training data, either by altering contextual details or by directly modifying problem structure. We find that the model can generate novel and meaningful problems, and that these LLM-driven mutations improve RL training. We analyse key aspects of DéjàQ, including the validity of generated problems and computational overhead. Our results underscore the potential of dynamically evolving training data to enhance mathematical reasoning and indicate broader applicability, which we will support by open-sourcing our code.
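To make the evolutionary loop described in the abstract concrete, here is a minimal sketch of one DéjàQ-style evolution step. It is illustrative only: the mutation operators, verifier, and solver are placeholder callables supplied by the caller, and the learnability score p * (1 - p) (highest when the model solves a problem sometimes but not always) is an assumed proxy, not necessarily the paper's exact criterion.

```python
import random
from typing import Callable, List


def learnability(success_rate: float) -> float:
    # Assumed proxy: problems solved sometimes, but not always, score highest.
    return success_rate * (1.0 - success_rate)


def evolve_step(
    population: List[str],
    mutate_context: Callable[[str], str],    # LLM rewrites contextual details
    mutate_structure: Callable[[str], str],  # LLM modifies problem structure
    verify: Callable[[str], bool],           # checks the problem has a valid answer
    solve: Callable[[str], bool],            # one model rollout; True if correct
    n_rollouts: int = 8,
) -> List[str]:
    """One evolution step: mutate, filter invalid problems, score, select."""
    candidates = []
    for problem in population:
        # Two LLM-driven mutation strategies, chosen at random here.
        mutant = (mutate_context(problem) if random.random() < 0.5
                  else mutate_structure(problem))
        if verify(mutant):
            candidates.append(mutant)

    scored = []
    for problem in population + candidates:
        successes = sum(solve(problem) for _ in range(n_rollouts))
        scored.append((learnability(successes / n_rollouts), problem))

    # Keep the most learnable problems for the next round of RL training.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [problem for _, problem in scored[:len(population)]]
```

In this reading, the problem set co-evolves with the model: as training improves the solver, previously hard problems become too easy, their learnability score drops, and newly mutated problems take their place.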
Similar Papers
ThetaEvolve: Test-time Learning on Open Problems
Machine Learning (CS)
Helps computers discover math solutions faster.
Guided Self-Evolving LLMs with Minimal Human Supervision
Artificial Intelligence
AI learns better by teaching itself new things.
Decouple to Generalize: Context-First Self-Evolving Learning for Data-Scarce Vision-Language Reasoning
Artificial Intelligence
Teaches AI to learn from experience, not just answers.