Score: 1

From Online User Feedback to Requirements: Evaluating Large Language Models for Classification and Specification Tasks

Published: October 27, 2025 | arXiv ID: 2510.23055v1

By: Manjeshwar Aniruddh Mallya, Alessio Ferrari, Mohammad Amin Zadenoori, and more

Potential Business Impact:

Helps software teams automatically sort user feedback and draft requirements, so apps better reflect what users actually want.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

[Context and Motivation] Online user feedback provides valuable information to support requirements engineering (RE). However, analyzing online user feedback is challenging due to its large volume and noise. Large language models (LLMs) show strong potential to automate this process and outperform previous techniques. They can also enable new tasks, such as generating requirements specifications. [Question-Problem] Despite this potential, the use of LLMs to analyze user feedback for RE remains underexplored. Existing studies offer limited empirical evidence, lack thorough evaluation, and rarely provide replication packages, undermining validity and reproducibility. [Principal Idea-Results] We evaluate five lightweight open-source LLMs on three RE tasks: user request classification, non-functional requirement (NFR) classification, and requirements specification generation. Classification performance was measured on two feedback datasets, and specification quality via human evaluation. The LLMs achieved moderate-to-high classification accuracy (F1 ≈ 0.47-0.68) and moderately high specification quality (mean ≈ 3/5). [Contributions] We are among the first to explore lightweight LLMs for feedback-driven requirements development. Our contributions are: (i) an empirical evaluation of lightweight LLMs on three RE tasks, (ii) a replication package, and (iii) insights into their capabilities and limitations for RE.
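To make the evaluation setup concrete, below is a minimal sketch of how one of the classification tasks could be scored. The label set (bug report / feature request / other), the classify_feedback stub standing in for a real lightweight-LLM call, and the toy feedback items are all illustrative assumptions, not the authors' actual pipeline; only the macro-F1 scoring mirrors standard practice for this kind of study.

```python
# Minimal sketch of scoring a model on user-request classification.
# The label set and classify_feedback stub are assumptions; a real run
# would query one of the paper's lightweight open-source LLMs instead.
from sklearn.metrics import f1_score

LABELS = ["bug report", "feature request", "other"]  # assumed label set

def classify_feedback(text: str) -> str:
    """Stand-in for an LLM call (e.g., via Hugging Face transformers).
    Here: a trivial keyword heuristic so the sketch runs end to end."""
    lowered = text.lower()
    if "crash" in lowered or "error" in lowered:
        return "bug report"
    if "please add" in lowered or "would be nice" in lowered:
        return "feature request"
    return "other"

# Toy feedback items with gold labels (illustrative only).
dataset = [
    ("The app crashes every time I open the camera.", "bug report"),
    ("Please add a dark mode, it would be nice at night.", "feature request"),
    ("Great app, works fine for me.", "other"),
]

gold = [label for _, label in dataset]
predicted = [classify_feedback(text) for text, _ in dataset]

# Macro-F1 averages per-class F1, so rare classes count equally; the
# paper reports F1 roughly in the 0.47-0.68 range on real datasets.
print(f1_score(gold, predicted, labels=LABELS, average="macro"))
```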

Country of Origin
🇮🇪 🇮🇹 Ireland, Italy

Page Count
15 pages

Category
Computer Science: Software Engineering