Code for Machines, Not Just Humans: Quantifying AI-Friendliness with Code Health Metrics
By: Markus Borg, Nadim Hagatulah, Adam Tornhill, and more
Potential Business Impact:
Shows which code AI tools can change safely and which needs human oversight.
We are entering a hybrid era in which human developers and AI coding agents work in the same codebases. While industry practice has long optimized code for human comprehension, it is increasingly important to ensure that LLMs with different capabilities can edit code reliably. In this study, we investigate the concept of "AI-friendly code" via LLM-based refactoring on a dataset of 5,000 Python files from competitive programming. We find a meaningful association between CodeHealth, a quality metric calibrated for human comprehension, and semantic preservation after AI refactoring. Our findings confirm that human-friendly code is also more compatible with AI tooling. These results suggest that organizations can use CodeHealth to guide where AI interventions are lower risk and where additional human oversight is warranted. Investing in maintainability not only helps humans; it also prepares codebases for large-scale AI adoption.
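To make the study design concrete, here is a minimal sketch of how semantic preservation after AI refactoring could be checked for competitive-programming solutions: run the original and the refactored file on the same inputs and compare their outputs. The function names, the subprocess-based runner, and the exact-match comparison below are illustrative assumptions, not the paper's actual pipeline or metrics.

```python
# Illustrative sketch only; names and the toy equivalence check are hypothetical.
import subprocess
import tempfile
from pathlib import Path


def run_solution(source: str, stdin_data: str, timeout: float = 5.0) -> str:
    """Run a competitive-programming style Python solution and capture its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = Path(f.name)
    try:
        result = subprocess.run(
            ["python", str(path)],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout
    finally:
        path.unlink(missing_ok=True)


def preserves_semantics(original: str, refactored: str, test_inputs: list[str]) -> bool:
    """Hypothetical check: the refactored file counts as semantics-preserving
    only if it produces identical output on every test input."""
    return all(
        run_solution(original, stdin) == run_solution(refactored, stdin)
        for stdin in test_inputs
    )
```

In practice, a pass/fail signal like this could be aggregated per file and then correlated with the file's CodeHealth score to see whether healthier code survives AI refactoring more often, which is the kind of association the abstract describes.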
Similar Papers
Human-Written vs. AI-Generated Code: A Large-Scale Study of Defects, Vulnerabilities, and Complexity
Software Engineering
AI code has more security flaws than human code.
Human and Machine: How Software Engineers Perceive and Engage with AI-Assisted Code Reviews Compared to Their Peers
Software Engineering
AI helps review computer code, but people still decide.
Coding With AI: From a Reflection on Industrial Practices to Future Computer Science and Software Engineering Education
Software Engineering
Helps coders build software faster, but needs careful checks.