Automatic High-Level Test Case Generation using Large Language Models
By: Navid Bin Hasan, Md. Ashraful Islam, Junaed Younus Khan, and more
Potential Business Impact:
Helps computers write tests that match what businesses want.
We explored the challenges practitioners face in software testing and proposed automated solutions to address them. We began with a survey of 26 practitioners from local software companies, which revealed that the primary challenge is not writing test scripts but aligning testing efforts with business requirements. Based on these insights, we constructed a use-case → high-level test-case dataset to train and fine-tune models for generating high-level test cases. High-level test cases specify which aspects of the software's functionality need to be tested, along with the expected outcomes. We evaluated large language models, including GPT-4o, Gemini, LLaMA 3.1 8B, and Mistral 7B, and found that fine-tuning the latter two improves performance. A final human-evaluation survey confirmed the effectiveness of the generated test cases. Our proactive approach strengthens requirement-testing alignment and enables early test case generation, streamlining development.
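The paper does not publish its prompts or fine-tuning setup, but a minimal sketch of the core idea, asking an instruction-following LLM to turn a use case into high-level test cases, might look like the following. The example use case, the prompt wording, and the use of the OpenAI chat-completions client are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: prompt an LLM to turn a use case into high-level test cases.
# The prompt wording and example use case are assumptions; the paper's own prompts,
# dataset, and fine-tuned LLaMA 3.1 8B / Mistral 7B models are not reproduced here.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

use_case = """Title: Checkout with saved card
Actor: Registered customer
Main flow: The customer reviews the cart, selects a saved card,
confirms the order, and receives an order confirmation email."""

prompt = (
    "You are a QA engineer. Given the use case below, write high-level test cases.\n"
    "Each test case must state WHAT functionality to test and the EXPECTED outcome,\n"
    "without step-by-step test scripts.\n\n"
    f"Use case:\n{use_case}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the models evaluated in the paper
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep generations focused
)

print(response.choices[0].message.content)
```

A fine-tuned open model (such as the LLaMA 3.1 8B or Mistral 7B variants the paper fine-tunes, served locally) could sit behind the same prompt; the paper reports that fine-tuning those two models improves the quality of the generated test cases.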
Similar Papers
Acceptance Test Generation with Large Language Models: An Industrial Case Study
Software Engineering
Helps make sure websites work right automatically.
Generating High-Level Test Cases from Requirements using LLM: An Industry Study
Software Engineering
Computers write test plans from instructions.
Test Case Generation from Bug Reports via Large Language Models: A Cognitive Layered Evaluation Framework
Software Engineering
Helps computers write better code tests.