Score: 2

How Effective are Generative Large Language Models in Performing Requirements Classification?

Published: April 23, 2025 | arXiv ID: 2504.16768v1

By: Waad Alhoshan, Alessio Ferrari, Liping Zhao

Potential Business Impact:

Helps computers automatically sort and understand software requirements documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In recent years, transformer-based large language models (LLMs) have revolutionised natural language processing (NLP), with generative models opening new possibilities for tasks that require context-aware text generation. Requirements engineering (RE) has also seen a surge in experimentation with LLMs for different tasks, including trace-link detection, regulatory compliance, and others. Requirements classification is a common task in RE. While non-generative LLMs like BERT have been successfully applied to this task, there has been limited exploration of generative LLMs. This gap raises an important question: how well can generative LLMs, which produce context-aware outputs, perform in requirements classification? In this study, we explore the effectiveness of three generative LLMs (Bloom, Gemma, and Llama) in performing both binary and multi-class requirements classification. We design an extensive experimental study involving over 400 experiments across three widely used datasets (PROMISE NFR, Functional-Quality, and SecReq). Our study concludes that while factors like prompt design and LLM architecture are universally important, others, such as dataset variations, have a more situational impact, depending on the complexity of the classification task. This insight can guide future model development and deployment strategies, focusing on optimising prompt structures and aligning model architectures with task-specific needs for improved performance.
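To make the prompt-based setup concrete, below is a minimal sketch of zero-shot requirements classification with an open generative LLM. The model name, prompt wording, and label parsing here are illustrative assumptions, not the authors' exact configuration; the binary functional vs. non-functional split mirrors the PROMISE NFR task mentioned in the abstract.

```python
# Minimal sketch of zero-shot, prompt-based requirements classification.
# Model, prompt, and parsing are assumptions for illustration only.
from transformers import pipeline

# Any instruction-tuned open model can stand in here; Gemma is one of the
# three model families studied in the paper (its weights are gated on
# Hugging Face and require accepting the licence).
generator = pipeline("text-generation", model="google/gemma-2b-it")

# Check "non-functional" first: it contains "functional" as a substring,
# so the reverse order would misclassify non-functional answers.
LABELS = ["non-functional", "functional"]

def classify_requirement(requirement: str) -> str:
    """Prompt the LLM for a one-word label and parse its free-text answer."""
    prompt = (
        "Classify the following software requirement as 'functional' or "
        "'non-functional'. Answer with a single word.\n\n"
        f"Requirement: {requirement}\nAnswer:"
    )
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    # The pipeline echoes the prompt, so strip it before matching labels.
    answer = out[0]["generated_text"][len(prompt):].strip().lower()
    return next((label for label in LABELS if label in answer), "unknown")

print(classify_requirement(
    "The system shall encrypt all stored user passwords."
))
```

In the paper's design, variations of exactly these ingredients (prompt structure, model family, and dataset) are what the 400+ experiments systematically compare.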

Country of Origin
🇮🇪 🇬🇧 Ireland, United Kingdom

Repos / Data Links

Page Count
40 pages

Category
Computer Science:
Computation and Language