
DéjàQ: Open-Ended Evolution of Diverse, Learnable and Verifiable Problems

Published: January 5, 2026 | arXiv ID: 2601.01931v1

By: Willem Röpke, Samuel Coward, Andrei Lupu, and more

Potential Business Impact:

Trains AI models to reason about mathematics by generating new practice problems during training.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Recent advances in reasoning models have yielded impressive results in mathematics and coding. However, most approaches rely on static datasets, which have been suggested to encourage memorisation and limit generalisation. We introduce DéjàQ, a framework that departs from this paradigm by jointly evolving a diverse set of synthetic mathematical problems alongside model training. This evolutionary process adapts to the model's ability throughout training, optimising problems for learnability. We propose two LLM-driven mutation strategies in which the model itself mutates the training data, either by altering contextual details or by directly modifying problem structure. We find that the model can generate novel and meaningful problems, and that these LLM-driven mutations improve RL training. We analyse key aspects of DéjàQ, including the validity of generated problems and computational overhead. Our results underscore the potential of dynamically evolving training data to enhance mathematical reasoning and indicate broader applicability, which we will support by open-sourcing our code.
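The core loop the abstract describes (mutate training problems, score them for learnability relative to the model's current ability, select the most learnable) can be sketched as a toy evolutionary algorithm. This is a minimal illustration, not the paper's implementation: the `solve_rate` function is a stand-in for the model's empirical success rate, the numeric `mutate` replaces the paper's LLM-driven mutations, and the `learnability` score (highest when problems are solved about half the time) is one common proxy for learning signal.

```python
import random

random.seed(0)

def make_problem(a, b):
    # A verifiable synthetic problem: question text plus a ground-truth answer.
    return {"q": f"What is {a} * {b}?", "answer": a * b, "difficulty": a + b}

def solve_rate(problem, skill=20):
    # Stand-in for the model's empirical success rate on this problem;
    # harder problems (larger operands) are solved less often.
    return max(0.0, min(1.0, 1.0 - problem["difficulty"] / (2 * skill)))

def learnability(problem):
    # Problems solved ~50% of the time give the strongest learning signal;
    # trivially easy or impossibly hard ones score near zero (max is 0.25).
    p = solve_rate(problem)
    return p * (1.0 - p)

def mutate(problem):
    # Structural mutation: resample the operands. (DéjàQ instead uses
    # LLM-driven mutations of a problem's context or structure.)
    return make_problem(random.randint(1, 20), random.randint(1, 20))

def evolve(pool, generations=30, k=8):
    for _ in range(generations):
        children = [mutate(p) for p in pool]
        # Keep the k most learnable problems across parents and children.
        pool = sorted(pool + children, key=learnability, reverse=True)[:k]
    return pool

pool = [make_problem(random.randint(1, 20), random.randint(1, 20))
        for _ in range(8)]
evolved = evolve(pool)
```

Because selection favours problems whose solve rate sits near 50%, the evolved pool concentrates around the model's current ability, which is the adaptive-curriculum effect the abstract attributes to optimising problems for learnability.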

Country of Origin
🇧🇪 🇬🇧 Belgium, United Kingdom

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)