The Rise of Agentic Testing: Multi-Agent Systems for Robust Software Quality Assurance
By: Saba Naqvi, Mohammad Baqar, Nawaz Ali Mohammad
Potential Business Impact:
Enables test suites that generate, run, and repair themselves, reducing manual QA effort and improving coverage.
Software testing has progressed toward intelligent automation, yet current AI-based test generators still suffer from static, single-shot outputs that frequently produce invalid, redundant, or non-executable tests due to the lack of execution-aware feedback. This paper introduces an agentic, multi-model testing framework: a closed-loop, self-correcting system in which a Test Generation Agent, an Execution and Analysis Agent, and a Review and Optimization Agent collaboratively generate, execute, analyze, and refine tests until convergence. By using sandboxed execution, detailed failure reporting, and iterative regeneration or patching of failing tests, the framework autonomously improves test quality and expands coverage. Integrated into a CI/CD-compatible pipeline, it leverages reinforcement signals from coverage metrics and execution outcomes to guide refinement. Empirical evaluations on microservice-based applications show up to a 60% reduction in invalid tests, a 30% coverage improvement, and significantly reduced human effort compared to single-model baselines, demonstrating that multi-agent, feedback-driven loops can evolve software testing into an autonomous, continuously learning quality-assurance ecosystem for self-healing, high-reliability codebases.
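To make the closed loop concrete, here is a minimal Python sketch of the generate-execute-analyze-refine cycle the abstract describes. All names (TestGenerationAgent, ExecutionAnalysisAgent, ReviewOptimizationAgent, run_pipeline) are illustrative assumptions, not the authors' implementation; the LLM calls, sandboxed runner, and coverage tooling are stubbed out to show only the control flow.

```python
"""Hypothetical sketch of the closed-loop agentic testing pipeline.
Agent internals (LLM prompts, sandbox, coverage measurement) are stubbed."""

from dataclasses import dataclass
from typing import Optional


@dataclass
class TestCase:
    name: str
    code: str
    passed: Optional[bool] = None   # None until executed
    failure_report: str = ""


class TestGenerationAgent:
    """Drafts candidate tests for a target module (LLM call stubbed out)."""

    def generate(self, module_source: str, n: int = 3) -> list[TestCase]:
        return [TestCase(name=f"test_case_{i}", code=f"# generated test {i}")
                for i in range(n)]

    def patch(self, test: TestCase) -> TestCase:
        """Regenerate or repair a failing test using its failure report."""
        return TestCase(name=test.name, code=test.code + "\n# patched")


class ExecutionAnalysisAgent:
    """Runs tests in a sandbox, records pass/fail and a coverage signal (stubbed)."""

    def execute(self, tests: list[TestCase]) -> float:
        for i, t in enumerate(tests):
            t.passed = (i % 2 == 0)                      # stand-in for real execution
            t.failure_report = "" if t.passed else "AssertionError: ..."
        return sum(bool(t.passed) for t in tests) / max(len(tests), 1)


class ReviewOptimizationAgent:
    """Routes failing tests back to the generator for regeneration or patching."""

    def refine(self, tests: list[TestCase],
               gen: TestGenerationAgent) -> list[TestCase]:
        return [gen.patch(t) if t.passed is False else t for t in tests]


def run_pipeline(module_source: str, coverage_target: float = 0.9,
                 max_iterations: int = 5) -> list[TestCase]:
    gen = TestGenerationAgent()
    runner = ExecutionAnalysisAgent()
    reviewer = ReviewOptimizationAgent()

    tests = gen.generate(module_source)
    for _ in range(max_iterations):
        coverage = runner.execute(tests)
        if coverage >= coverage_target and all(t.passed for t in tests):
            break                                        # converged
        tests = reviewer.refine(tests, gen)              # repair failing tests
    return tests


if __name__ == "__main__":
    suite = run_pipeline("def add(a, b): return a + b")
    print([(t.name, t.passed) for t in suite])
```

In a real deployment the loop would run inside the CI/CD pipeline, with coverage metrics and execution outcomes feeding the refinement decision in place of the stubbed signals above.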
Similar Papers
Testing and Enhancing Multi-Agent Systems for Robust Code Generation
Software Engineering
Tests and strengthens multi-agent systems that generate code.
Analyzing Code Injection Attacks on LLM-based Multi-Agent Systems in Software Development
Software Engineering
Examines how code injection attacks can compromise LLM-based multi-agent software development systems.
Reinforcement Learning Integrated Agentic RAG for Software Test Cases Authoring
Software Engineering
Combines reinforcement learning with agentic retrieval-augmented generation to author software test cases.