More code, less validation: Risk factors for over-reliance on AI coding tools among scientists
By: Gabrielle O'Brien, Alexis Parker, Nasir Eisty, and more
Potential Business Impact:
AI helps scientists write code, but they might not check it.
Programming is essential to modern scientific research, yet most scientists report inadequate training for the software development their work demands. Generative AI tools capable of code generation may support scientific programmers, but user studies indicate risks of over-reliance, particularly among inexperienced users. We surveyed 868 scientists who program, examining adoption patterns, tool preferences, and factors associated with perceived productivity. Adoption is highest among students and less experienced programmers, with variation across fields. Scientific programmers overwhelmingly prefer general-purpose conversational interfaces like ChatGPT over developer-specific tools. Both inexperience and limited use of development practices (like testing, code review, and version control) are associated with greater perceived productivity, but these factors interact, suggesting formal practices may partially compensate for inexperience. The strongest predictor of perceived productivity is the number of lines of generated code typically accepted at once. These findings suggest scientific programmers using generative AI may gauge productivity by code generation rather than validation, raising concerns about research code integrity.
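To make the phrase "these factors interact" concrete, here is a minimal sketch of the kind of regression with an interaction term that such a claim typically rests on. The variable names, toy data, and model form are illustrative assumptions, not the authors' actual survey fields or analysis.

```python
# Hypothetical illustration: perceived productivity modeled as a function of
# programming experience, use of formal development practices, and their
# interaction. All names and values here are made up for demonstration.
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for survey responses (assumed variables, not the real dataset).
df = pd.DataFrame({
    "perceived_productivity": [4, 5, 3, 2, 5, 4, 1, 3],
    "years_experience":       [1, 2, 8, 10, 1, 3, 12, 6],
    "formal_practices_score": [0, 1, 3, 1, 2, 0, 0, 2],  # e.g., count of testing/review/version-control habits
})

# The "*" in the formula adds both main effects and their interaction, so the
# estimated effect of experience is allowed to depend on how many formal
# practices a respondent uses -- the statistical sense of "factors interact".
model = smf.ols(
    "perceived_productivity ~ years_experience * formal_practices_score",
    data=df,
).fit()
print(model.summary())
```

A negative or offsetting interaction coefficient in a model like this is what would support the statement that formal practices partially compensate for inexperience.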
Similar Papers
Developer Productivity with GenAI
Software Engineering
AI helps coders work faster, not better.
Ten Simple Rules for AI-Assisted Coding in Science
Software Engineering
Helps scientists write trustworthy computer code faster.