HAI-Eval: Measuring Human-AI Synergy in Collaborative Coding
By: Hanjun Luo, Chiming Ni, Jiaheng Wen, and more
Potential Business Impact:
Tests how well people and AI code together.
LLM-powered coding agents are reshaping the development paradigm. However, existing evaluation systems, whether traditional tests for humans or benchmarks for LLMs, fail to capture this shift. They remain focused on well-defined algorithmic problems, excluding problems where success depends on human-AI collaboration. Such collaborative problems not only require human reasoning to interpret complex contexts and guide solution strategies, but also demand AI efficiency for implementation. To bridge this gap, we introduce HAI-Eval, a unified benchmark designed to measure the synergy of human-AI partnership in coding. HAI-Eval's core innovation is its "Collaboration-Necessary" problem templates, which are intractable for both standalone LLMs and unaided humans, but solvable through effective collaboration. Specifically, HAI-Eval uses 45 templates to dynamically create tasks. It also provides a standardized IDE for human participants and a reproducible toolkit with 450 task instances for LLMs, ensuring an ecologically valid evaluation. We conduct a within-subject study with 45 participants and benchmark their performance against 5 state-of-the-art LLMs under 4 different levels of human intervention. Results show that standalone LLMs and unaided participants achieve poor pass rates (0.67% and 18.89%, respectively), while human-AI collaboration significantly improves performance to 31.11%. Our analysis reveals an emerging co-reasoning partnership. This finding challenges the traditional human-tool hierarchy by showing that strategic breakthroughs can originate from either humans or AI. HAI-Eval establishes not only a challenging benchmark for next-generation coding agents but also a grounded, scalable framework for assessing core developer competencies in the AI era. Our benchmark and interactive demo will be openly accessible.
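The reported numbers imply straightforward bookkeeping: 45 templates each instantiated 10 times yields the 450 task instances, and a pass rate is the fraction of instances solved under a given condition. Below is a minimal sketch of that aggregation; the `Result` record, condition labels, and `pass_rate` helper are illustrative assumptions, not part of the paper's released toolkit.

```python
from dataclasses import dataclass

@dataclass
class Result:
    template_id: int   # one of the 45 "Collaboration-Necessary" templates
    condition: str     # e.g. "llm_only", "human_only", "collaboration" (hypothetical labels)
    solved: bool       # whether this task instance passed

def pass_rate(results: list[Result], condition: str) -> float:
    """Fraction of task instances solved under the given condition."""
    subset = [r for r in results if r.condition == condition]
    if not subset:
        return 0.0
    return sum(r.solved for r in subset) / len(subset)

# 45 templates x 10 instances each = 450 task instances per condition.
# With the paper's reported results, pass_rate would return roughly
# 0.0067 for standalone LLMs, 0.1889 for unaided humans, and 0.3111
# for human-AI collaboration.
```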
Similar Papers
Evaluations at Work: Measuring the Capabilities of GenAI in Use
Artificial Intelligence
Tests how well people and AI work together.
A Call for Collaborative Intelligence: Why Human-Agent Systems Should Precede AI Autonomy
Artificial Intelligence
AI helps people do jobs better, not alone.
Towards Effective Human-in-the-Loop Assistive AI Agents
CV and Pattern Recognition
AI helps people do jobs better and faster.