GenIA-E2ETest: A Generative AI-Based Approach for End-to-End Test Automation
By: Elvis Júnior, Alan Valejo, Jorge Valverde-Rebaza, and more
Potential Business Impact:
Writes computer tests from plain English.
Software testing is essential to ensure system quality, but it remains time-consuming and error-prone when performed manually. Although recent advances in Large Language Models (LLMs) have enabled automated test generation, most existing solutions focus on unit testing and do not address the challenges of end-to-end (E2E) testing, which validates complete application workflows from user input to final system response. This paper introduces GenIA-E2ETest, which leverages generative AI to automatically generate executable E2E test scripts from natural-language descriptions. We evaluated the approach on two web applications, assessing completeness, correctness, adaptation effort, and robustness. Results were encouraging: the generated scripts averaged 77% on both element-level metrics, 82% execution precision, and 85% execution recall; they required minimal manual adjustment (an average manual modification rate of 10%) and performed consistently in typical web scenarios. Although some sensitivity to context-dependent navigation and dynamic content was observed, the findings suggest that GenIA-E2ETest is a practical and effective solution for accelerating E2E test automation from natural language, reducing manual effort and broadening access to automated testing.
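The core idea described in the abstract, prompting a generative model with a plain-English test description and receiving an executable E2E script back, can be sketched roughly as follows. This is an illustrative sketch only, not the authors' pipeline: the OpenAI client, the model name, the prompt wording, and Playwright as the target test framework are all assumptions made for this example.

```python
"""
Minimal sketch of natural-language-to-E2E-script generation.
Not the GenIA-E2ETest implementation; model, prompt, and framework
choices here are assumptions for illustration only.
"""
from openai import OpenAI  # assumes the `openai` package and OPENAI_API_KEY are available

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a test automation assistant. Given a natural-language description "
    "of an end-to-end web test, output only a runnable Python Playwright script "
    "(sync API) that performs the described workflow and asserts the expected outcome."
)


def generate_e2e_script(description: str, base_url: str) -> str:
    """Ask the LLM for an executable E2E script covering the described workflow."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the paper does not specify one
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Application under test: {base_url}\nTest case: {description}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    script = generate_e2e_script(
        description="Log in with valid credentials and check that the dashboard greets the user by name.",
        base_url="https://example.com",  # hypothetical application under test
    )
    # In practice the generated script is reviewed and adjusted before execution,
    # which is what the paper's manual modification rate quantifies.
    with open("test_login_dashboard.py", "w", encoding="utf-8") as f:
        f.write(script)
```

In the paper's evaluation, generated scripts of this kind are executed against the target web applications; the element-level and execution-level precision/recall and the manual modification rate reported in the abstract quantify how much of the intended workflow a generated script covers correctly and how much hand-editing it still requires.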
Similar Papers
Automated Web Application Testing: End-to-End Test Case Generation with Large Language Models and Screen Transition Graphs
Software Engineering
Tests websites automatically, finding broken links and forms.
GenAI-based test case generation and execution in SDV platform
Software Engineering
Tests car software automatically from instructions.
A Study on the Improvement of Code Generation Quality Using Large Language Models Leveraging Product Documentation
Software Engineering
Makes apps work right by testing them automatically.