ResearchCodeBench: Benchmarking LLMs on Implementing Novel Machine Learning Research Code
By: Tianyu Hua, Harper Hua, Violet Xiang, and more
Potential Business Impact:
Helps computers write code from new science papers.
Large language models (LLMs) have shown promise in transforming machine learning research, yet their capability to faithfully implement novel ideas from recent research papers (ideas unseen during pretraining) remains unclear. We introduce ResearchCodeBench, a benchmark of 212 coding challenges that evaluates LLMs' ability to translate cutting-edge ML contributions from top 2024-2025 research papers into executable code. We assessed 30+ proprietary and open-source LLMs, finding that even the best models correctly implement less than 40% of the code. Gemini-2.5-Pro-Preview performs best, with a 37.3% success rate, followed by O3 (High) and O4-mini (High) at 32.3% and 30.8%, respectively. We present empirical findings on performance comparisons, contamination, and error patterns. By providing a rigorous, community-driven evaluation platform, ResearchCodeBench enables continuous understanding and advancement of LLM-driven innovation in research code generation.
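The abstract scores models by the fraction of challenges whose generated implementation runs correctly. As a rough illustration only, the sketch below shows one way such a pass/fail harness could be organized; the `Challenge` dataclass, directory layout, and `test_command` are assumptions for the example, not the benchmark's actual code.

```python
# Hypothetical sketch of a pass/fail harness for paper-derived coding challenges.
# The names and layout here are illustrative assumptions, not ResearchCodeBench internals.
import subprocess
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Challenge:
    paper_id: str            # e.g. arXiv ID of the source paper
    target_file: Path        # file where the model's completion is inserted
    test_command: list[str]  # command that exercises the implementation

def evaluate(challenge: Challenge, generated_code: str, timeout_s: int = 300) -> bool:
    """Write the model's completion into the target file and run the challenge's tests."""
    challenge.target_file.write_text(generated_code)
    try:
        result = subprocess.run(
            challenge.test_command,
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False
    # Success means the code executed and the tests passed.
    return result.returncode == 0

def success_rate(results: list[bool]) -> float:
    """Fraction of challenges passed, e.g. 0.373 for a 37.3% success rate."""
    return sum(results) / len(results) if results else 0.0
```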
Similar Papers
LLM Benchmarking with LLaMA2: Evaluating Code Development Performance Across Multiple Programming Languages
Software Engineering
AI writes code, but needs help for hard jobs.
An Experimental Study of Real-Life LLM-Proposed Performance Improvements
Software Engineering
Computers write faster code, but humans write best.
Benchmarking Large Language Models on Homework Assessment in Circuit Analysis
Computers and Society
Helps computers grade student homework accurately.