AInsteinBench: Benchmarking Coding Agents on Scientific Repositories
By: Titouan Duston, Shuo Xin, Yang Sun, and more
We introduce AInsteinBench, a large-scale benchmark for evaluating whether large language model (LLM) agents can operate as scientific computing development agents within real research software ecosystems. Unlike existing scientific reasoning benchmarks, which focus on conceptual knowledge, or software engineering benchmarks, which emphasize generic feature implementation and issue resolution, AInsteinBench evaluates models in end-to-end scientific development settings grounded in production-grade scientific repositories. The benchmark consists of tasks derived from maintainer-authored pull requests across six widely used scientific codebases, spanning quantum chemistry, quantum computing, molecular dynamics, numerical relativity, fluid dynamics, and cheminformatics. All benchmark tasks are carefully curated through multi-stage filtering and expert review to ensure scientific challenge, adequate test coverage, and well-calibrated difficulty. By combining evaluation in executable environments, scientifically meaningful failure modes, and test-driven verification, AInsteinBench measures a model's ability to move beyond surface-level code generation toward the core competencies required for computational scientific research.
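To make the test-driven verification concrete, here is a minimal sketch of how a single task derived from a maintainer-authored pull request might be checked: the agent's patch is applied at the task's base commit and the tests associated with the original PR are rerun. The field names, the `Task` class, and the `verify` helper are illustrative assumptions for exposition, not AInsteinBench's actual schema or harness.

```python
# Hypothetical verification loop for one benchmark task.
# Schema and helper names are assumptions, not the benchmark's real interface.
import subprocess
from dataclasses import dataclass

@dataclass
class Task:
    repo_url: str                   # e.g. a quantum chemistry or cheminformatics repo
    base_commit: str                # commit the agent starts from
    fail_to_pass_tests: list[str]   # tests that the maintainer's PR makes pass

def verify(task: Task, agent_patch: str, workdir: str) -> bool:
    """Apply the agent's patch at the base commit and rerun the gating tests."""
    # Reset the checkout to the task's starting point.
    subprocess.run(["git", "-C", workdir, "checkout", task.base_commit], check=True)
    # Apply the agent-produced patch from stdin; a patch that does not apply fails the task.
    applied = subprocess.run(["git", "-C", workdir, "apply", "-"],
                             input=agent_patch, text=True)
    if applied.returncode != 0:
        return False
    # The task counts as solved only if the PR's target tests now pass.
    result = subprocess.run(["python", "-m", "pytest", *task.fail_to_pass_tests],
                            cwd=workdir)
    return result.returncode == 0
```

In practice such a loop would run inside the executable environment shipped with each repository (pinned dependencies, compiled extensions), which is what lets the benchmark verify scientific behavior rather than surface-level code similarity.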
Similar Papers
AstaBench: Rigorous Benchmarking of AI Agents with a Scientific Research Suite
Artificial Intelligence
Tests AI's ability to do science research.
InnovatorBench: Evaluating Agents' Ability to Conduct Innovative LLM Research
Artificial Intelligence
Tests AI's ability to do real science research.