CodeAlignBench: Assessing Code Generation Models on Developer-Preferred Code Adjustments
By: Forough Mehralian, Ryan Shar, James R. Rae, and more
Potential Business Impact:
Tests whether AI follows developers' coding instructions, not just whether its code runs correctly.
As large language models become increasingly capable of generating code, evaluating their performance remains a complex and evolving challenge. Existing benchmarks primarily focus on functional correctness, overlooking the diversity of real-world coding tasks and developer expectations. To address this, we introduce a multi-language benchmark that evaluates LLM instruction-following capabilities and is extensible to any set of standalone coding problems. Our benchmark evaluates instruction following in two key settings: adherence to pre-defined constraints specified with the initial problem, and the ability to perform refinements based on follow-up instructions. For this paper's analysis, we empirically evaluated our benchmarking pipeline on programming tasks from LiveBench, which are automatically translated from Python into Java and JavaScript. Our automated benchmark reveals that models exhibit differing levels of performance across multiple dimensions of instruction following. Our benchmarking pipeline provides a more comprehensive evaluation of code generation models, highlighting their strengths and limitations across languages and generation goals.
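The abstract does not give implementation details, but as a rough illustration of what a constraint-adherence check in such a pipeline might look like, here is a minimal Python sketch. The "avoid recursion" constraint, the `solve` entry point, and the helper names are assumptions for illustration only, not the authors' actual evaluation code.

```python
import ast

def check_constraint_no_recursion(code: str, func_name: str) -> bool:
    """Return True if the named function never calls itself
    (a sample 'avoid recursion' constraint check)."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            called = [n.func.id for n in ast.walk(node)
                      if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
            return func_name not in called
    return False  # missing function counts as a constraint failure

def run_functional_tests(code: str, tests: list) -> bool:
    """Execute a candidate solution against (args, expected) pairs."""
    namespace = {}
    exec(code, namespace)  # sandboxed execution assumed in a real pipeline
    solve = namespace["solve"]
    return all(solve(*args) == expected for args, expected in tests)

candidate = """
def solve(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
"""

tests = [((5,), 15), ((1,), 1)]
print("constraint ok:", check_constraint_no_recursion(candidate, "solve"))
print("tests pass:   ", run_functional_tests(candidate, tests))
```

A pipeline like the one described could combine such constraint checks with standard unit tests, scoring a model both on whether its code passes the tests and on whether it honors the stated instructions.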
Similar Papers
Dynamic Benchmark Construction for Evaluating Large Language Models on Real-World Codes
Software Engineering
Tests AI code writing to find its mistakes.
A Multi-Dimensional Constraint Framework for Evaluating and Improving Instruction Following in Large Language Models
Computation and Language
Teaches computers to follow instructions better.
AutoCodeBench: Large Language Models are Automatic Code Benchmark Generators
Computation and Language
Makes computers write code in many languages.