InnoGym: Benchmarking the Innovation Potential of AI Agents
By: Jintian Zhang, Kewei Xu, Jingsheng Zheng, and more
Potential Business Impact:
Measures how creatively AI solves problems, not just whether its answers are correct.
LLMs and Agents have achieved impressive progress in code generation, mathematical reasoning, and scientific discovery. However, existing benchmarks primarily measure correctness, overlooking the diversity of methods behind solutions. True innovation depends not only on producing correct answers but also on the originality of the approach. We present InnoGym, the first benchmark and framework designed to systematically evaluate the innovation potential of AI agents. InnoGym introduces two complementary metrics: performance gain, which measures improvement over the best-known solutions, and novelty, which captures methodological differences from prior approaches. The benchmark includes 18 carefully curated tasks from real-world engineering and scientific domains, each standardized through resource filtering, evaluator validation, and solution collection. In addition, we provide iGym, a unified execution environment for reproducible and long-horizon evaluations. Extensive experiments show that while some agents produce novel approaches, their lack of robustness limits performance gains. These results highlight a key gap between creativity and effectiveness, underscoring the need for benchmarks that evaluate both.
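The abstract names two complementary metrics but does not give their formulas. A minimal sketch of how such metrics might be computed, assuming performance gain is relative improvement over the best-known score and novelty is one minus the maximum cosine similarity to embeddings of prior approaches (both definitions are assumptions for illustration, not the paper's actual formulas):

```python
# Hypothetical InnoGym-style metrics; the paper does not specify
# these formulas, so the definitions below are illustrative assumptions.

def performance_gain(agent_score: float, best_known_score: float) -> float:
    """Relative improvement of an agent's solution over the best-known one."""
    return (agent_score - best_known_score) / abs(best_known_score)

def novelty(agent_emb: list[float], prior_embs: list[list[float]]) -> float:
    """1 minus the max cosine similarity to prior approaches
    (an assumed proxy for methodological difference)."""
    def cos(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)
    return 1.0 - max(cos(agent_emb, p) for p in prior_embs)

# An agent scoring 110 against a best-known 100 yields a 10% gain;
# an approach orthogonal to all prior ones yields maximal novelty.
print(performance_gain(110, 100))        # 0.1
print(novelty([1.0, 0.0], [[0.0, 1.0]])) # 1.0
```

Under these assumed definitions, a novel but brittle agent would score high on novelty and low (or negative) on performance gain, which is exactly the creativity-versus-effectiveness gap the abstract describes.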
Similar Papers
InnovatorBench: Evaluating Agents' Ability to Conduct Innovative LLM Research
Artificial Intelligence
Tests AI's ability to do real science research.
AI Idea Bench 2025: AI Research Idea Generation Benchmark
Artificial Intelligence
Tests AI's best new ideas for science.