Score: 1

CodeAlignBench: Assessing Code Generation Models on Developer-Preferred Code Adjustments

Published: October 31, 2025 | arXiv ID: 2510.27565v1

By: Forough Mehralian, Ryan Shar, James R. Rae, and more

BigTech Affiliations: Apple

Potential Business Impact:

Tests how well code-generation models follow developer instructions and constraints, both in the initial prompt and in follow-up refinements, across multiple programming languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models become increasingly capable of generating code, evaluating their performance remains a complex and evolving challenge. Existing benchmarks primarily focus on functional correctness, overlooking the diversity of real-world coding tasks and developer expectations. To address this, we introduce a multi-language benchmark that evaluates LLM instruction-following capabilities and is extensible to any set of standalone coding problems. Our benchmark evaluates instruction following in two key settings: adherence to pre-defined constraints specified with the initial problem, and the ability to perform refinements based on follow-up instructions. For this paper's analysis, we empirically evaluated our benchmarking pipeline on programming tasks from LiveBench, automatically translated from Python into Java and JavaScript. Our automated benchmark reveals that models differ in performance across multiple dimensions of instruction following. Our benchmarking pipeline provides a more comprehensive evaluation of code generation models, highlighting their strengths and limitations across languages and generation goals.
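The abstract does not describe the pipeline's internals, but the general shape of such an evaluation can be sketched: for each standalone coding problem, score a generated solution on both functional correctness (running it against test cases) and adherence to the stated constraint (e.g., via a static check). The sketch below is a minimal, hypothetical illustration in Python; names such as `CodingProblem`, `violates_no_recursion`, and `evaluate` are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch: scoring one model-generated solution on functional
# correctness and on adherence to a single constraint ("do not use recursion").
import ast
from dataclasses import dataclass, field


@dataclass
class CodingProblem:
    prompt: str                                  # problem statement shown to the model
    constraint: str                              # e.g. "do not use recursion"
    tests: list = field(default_factory=list)    # list of (args_tuple, expected_output)


def violates_no_recursion(source: str) -> bool:
    """Static check: does any function in the solution call itself by name?"""
    tree = ast.parse(source)
    for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        calls = (n for n in ast.walk(fn) if isinstance(n, ast.Call))
        if any(isinstance(c.func, ast.Name) and c.func.id == fn.name for c in calls):
            return True
    return False


def evaluate(problem: CodingProblem, solution_source: str, entry_point: str) -> dict:
    """Score one solution: fraction of tests passed, plus constraint adherence."""
    namespace: dict = {}
    exec(solution_source, namespace)             # load the generated code
    fn = namespace[entry_point]
    passed = sum(fn(*args) == expected for args, expected in problem.tests)
    return {
        "functional_pass_rate": passed / max(len(problem.tests), 1),
        "constraint_followed": not violates_no_recursion(solution_source),
    }
```

In the follow-up-refinement setting described in the abstract, the same scoring step would simply be re-run on the revised solution after the model receives an additional instruction.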

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science:
Software Engineering