A Human-Centric Requirements Engineering Framework for Assessing GitHub Copilot Output
By: Soroush Heydari
Potential Business Impact:
Helps AI coding assistants understand and support programmers better.
The rapid adoption of Artificial Intelligence (AI) programming assistants such as GitHub Copilot introduces new challenges in how these tools address human needs. Many existing evaluation frameworks cover technical aspects such as code correctness and efficiency, but often overlook crucial human factors that affect the successful integration of AI assistants into software development workflows. In this study, I established a human-centered requirements framework with clear metrics and used it to evaluate GitHub Copilot Chat: I analyzed Copilot's interaction with users through its chat interface, measured its ability to adapt explanations and code generation to user expertise levels, and assessed its effectiveness in facilitating collaborative programming experiences. I discussed the test results and their implications for future analysis of human requirements in automated programming.
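To make the idea of "clear metrics" concrete, the sketch below shows one way a human-centered rubric could be scored in code; it is a minimal illustration, and the criterion names, weights, and example ratings are assumptions for this sketch, not the metrics defined in the study.

```python
from dataclasses import dataclass

# Hypothetical rubric: criterion names and weights are illustrative only,
# not the metrics established in the paper.
@dataclass
class HumanCentricCriterion:
    name: str
    weight: float
    description: str

CRITERIA = [
    HumanCentricCriterion("expertise_adaptation", 0.4,
        "Explanation depth matches the user's stated expertise level"),
    HumanCentricCriterion("explanation_clarity", 0.3,
        "Generated code is accompanied by an understandable rationale"),
    HumanCentricCriterion("collaboration_support", 0.3,
        "Response invites follow-up and supports iterative refinement"),
]

def score_response(ratings: dict) -> float:
    """Combine per-criterion ratings (0.0 to 1.0) into a weighted overall score."""
    return sum(c.weight * ratings.get(c.name, 0.0) for c in CRITERIA)

# Example: a reviewer rates one Copilot Chat transcript against the rubric.
example = {
    "expertise_adaptation": 0.8,
    "explanation_clarity": 0.6,
    "collaboration_support": 0.9,
}
print(f"Overall human-centric score: {score_response(example):.2f}")
```

A weighted rubric like this is only one possible design choice; the study's actual framework may define its criteria, scales, and aggregation differently.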
Similar Papers
"My productivity is boosted, but ..." Demystifying Users' Perception on AI Coding Assistants
Software Engineering
Helps AI write better code for programmers.
The Effects of GitHub Copilot on Computing Students' Programming Effectiveness, Efficiency, and Processes in Brownfield Programming Tasks
Software Engineering
Helps students code new features in old programs faster.
GitHub's Copilot Code Review: Can AI Spot Security Flaws Before You Commit?
Software Engineering
AI code checker misses big security problems.