Score: 1

How Natural Language Proficiency Shapes GenAI Code for Software Engineering Tasks

Published: November 6, 2025 | arXiv ID: 2511.04115v1

By: Ruksit Rojpaisarnkit, Youmei Fan, Kenichi Matsumoto, and more

Potential Business Impact:

Better English prompts make AI write more correct code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With the widespread adoption of Foundation Model (FM)-powered tools in software engineering, the natural language prompt has become a critical interface between developers and Large Language Models (LLMs). While much research has focused on prompt structure, natural language proficiency is an underexplored factor that can influence the quality of generated code. This paper investigates whether English language proficiency itself, independent of the prompting technique, affects the proficiency and correctness of code generated by LLMs. Using the HumanEval dataset, we systematically varied the English proficiency of prompts from basic to advanced across 164 programming tasks and measured the resulting code proficiency and correctness. Our findings show that LLMs default to an intermediate (B2) natural language level. While the effect on code proficiency was model-dependent, we found that higher-proficiency prompts consistently yielded more correct code across all models. These results demonstrate that natural language proficiency is a key lever for controlling code generation, helping developers tailor AI output and improve the reliability of solutions.
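The study's protocol is straightforward to reproduce in outline. The Python sketch below assumes the open-source human-eval harness; the specific CEFR levels chosen, the `rewrite_at_level` helper, and the `generate` callback are illustrative placeholders for LLM calls, not the authors' released tooling.

```python
# A minimal sketch of the protocol, assuming the open-source
# `human-eval` harness (pip install human-eval). `rewrite_at_level`
# and `generate` are hypothetical stand-ins for LLM calls.
from typing import Callable, Dict

from human_eval.data import read_problems
from human_eval.execution import check_correctness

CEFR_LEVELS = ["A1", "B2", "C2"]  # assumed basic / intermediate / advanced


def rewrite_at_level(prompt: str, level: str) -> str:
    """Placeholder: ask an LLM to restate `prompt` at CEFR level `level`."""
    return f"# Task restated at CEFR {level}\n{prompt}"


def pass_rate_by_level(generate: Callable[[str], str]) -> Dict[str, float]:
    """Run every HumanEval task at each proficiency level and return the
    fraction of tasks whose generated code passes the unit tests.
    `generate` maps a prompt to a code completion (your LLM client)."""
    problems = read_problems()  # the 164 HumanEval tasks
    passed = {level: 0 for level in CEFR_LEVELS}
    for problem in problems.values():
        for level in CEFR_LEVELS:
            completion = generate(rewrite_at_level(problem["prompt"], level))
            # NOTE: human-eval ships with code execution disabled for
            # safety; enable it per its README before running.
            result = check_correctness(problem, completion, timeout=3.0)
            passed[level] += int(result["passed"])
    return {level: n / len(problems) for level, n in passed.items()}
```

Comparing the per-level pass rates returned by `pass_rate_by_level` across several models is enough to observe the paper's headline effect: higher-proficiency prompts yielding more correct code.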

Country of Origin
🇯🇵 Japan

Repos / Data Links

Page Count
7 pages

Category
Computer Science:
Software Engineering