A Human-Centric Requirements Engineering Framework for Assessing GitHub Copilot Output

Published: August 5, 2025 | arXiv ID: 2508.03922v1

By: Soroush Heydari

Potential Business Impact:

Helps teams evaluate whether AI coding assistants such as GitHub Copilot actually meet programmers' needs, beyond raw code correctness.

The rapid adoption of Artificial Intelligence (AI) programming assistants such as GitHub Copilot introduces new challenges in how these tools address human needs. Many existing evaluation frameworks cover technical aspects such as code correctness and efficiency, but often overlook the human factors that determine whether AI assistants integrate successfully into software development workflows. In this study, I analyzed GitHub Copilot's interaction with users through its chat interface, measured Copilot's ability to adapt explanations and code generation to user expertise levels, and assessed its effectiveness in facilitating collaborative programming experiences. I established a human-centered requirements framework with clear metrics to evaluate these qualities in GitHub Copilot chat, and I discussed the test results and their implications for future analysis of human requirements in automated programming.
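As a rough illustration only: an evaluation like the one described could be organized around a small set of human-centered criteria scored per chat interaction. The sketch below is a minimal Python example under assumed criterion names (`explanation_adapts_to_expertise`, etc.) and a hypothetical 1-5 rating scale; it does not reproduce the paper's actual metrics or weighting.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical human-centered criteria; the paper's actual metric names
# and scoring scheme are not reproduced here.
CRITERIA = [
    "explanation_adapts_to_expertise",  # does the reply match the stated skill level?
    "code_matches_requested_scope",     # does the generated code address the asked task?
    "supports_collaboration",           # does the reply invite follow-up and iteration?
]

@dataclass
class ChatInteraction:
    prompt: str
    response: str
    # Each criterion rated 1-5 by a human evaluator.
    scores: dict = field(default_factory=dict)

def aggregate(interactions):
    """Average each criterion across all rated interactions."""
    return {
        c: mean(i.scores.get(c, 0) for i in interactions)
        for c in CRITERIA
    }

if __name__ == "__main__":
    sample = [
        ChatInteraction(
            prompt="Explain this recursion to a beginner",
            response="...",
            scores={CRITERIA[0]: 4, CRITERIA[1]: 5, CRITERIA[2]: 3},
        ),
    ]
    print(aggregate(sample))
```

Averaging per criterion, rather than collapsing everything into a single score, keeps the human-centered dimensions (adaptation to expertise, collaboration support) separately visible, which is the kind of breakdown a requirements-based evaluation would need.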

Country of Origin
🇨🇦 Canada

Page Count
8 pages

Category
Computer Science:
Software Engineering