From Code Generation to Software Testing: AI Copilot with Context-Based RAG
By: Yuchen Wang, Shangxin Guo, Chee Wei Tan
Potential Business Impact:
Finds software bugs faster and more accurately.
The rapid pace of large-scale software development places increasing demands on traditional testing methodologies, often creating bottlenecks in efficiency, accuracy, and coverage. We propose a novel perspective on software testing, positing that bug detection and coding with fewer bugs are two interconnected problems sharing a common goal: reducing bugs with limited resources. We extend our previous work on AI-assisted programming, which supports code auto-completion and chatbot-powered Q&A, to the realm of software testing. We introduce Copilot for Testing, an automated testing system that synchronizes bug detection with codebase updates, leveraging context-based Retrieval Augmented Generation (RAG) to enhance the capabilities of large language models (LLMs). Our evaluation demonstrates a 31.2% improvement in bug detection accuracy, a 12.6% increase in critical test coverage, and a 10.5% higher user acceptance rate, highlighting the transformative potential of AI-driven technologies in modern software development practices.
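The abstract does not specify the system's retriever, embedding model, or prompts, but the core idea of context-based RAG for test generation can be sketched as follows: on each codebase update, retrieve the chunks most relevant to the changed code and pair them with the change in an LLM prompt. The sketch below is hypothetical; it substitutes a simple bag-of-words cosine-similarity retriever for a learned embedding model, and the function names (`retrieve_context`, `build_test_prompt`) are illustrative, not from the paper.

```python
# Hypothetical sketch of context-based retrieval for LLM test generation.
# A bag-of-words cosine similarity stands in for a real embedding model.
import math
from collections import Counter

def embed(text):
    """Term-frequency vector (stand-in for a learned code embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_context(changed_code, codebase_chunks, k=2):
    """Return the k codebase chunks most similar to the changed code."""
    q = embed(changed_code)
    ranked = sorted(codebase_chunks,
                    key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_test_prompt(changed_code, codebase_chunks):
    """Pair the code change with retrieved context in a test-generation prompt."""
    context = "\n---\n".join(retrieve_context(changed_code, codebase_chunks))
    return (f"Context from codebase:\n{context}\n\n"
            f"Changed code:\n{changed_code}\n\n"
            "Write unit tests that expose likely bugs in the changed code.")

chunks = [
    "def parse_date(s): return s.split('-')",
    "def save_user(db, user): db.insert(user)",
    "def format_date(parts): return '-'.join(parts)",
]
prompt = build_test_prompt("def parse_date(s): return s.split('/')", chunks)
```

In this toy run, the retriever ranks the date-handling chunks above the unrelated `save_user` chunk, so the prompt carries only context relevant to the change, which is the mechanism by which RAG keeps the LLM grounded in the current codebase rather than its training data.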
Similar Papers
Agentic RAG for Software Testing with Hybrid Vector-Graph and Multi-Agent Orchestration
Software Engineering
Automates software testing, saving time and money.
Reinforcement Learning Integrated Agentic RAG for Software Test Cases Authoring
Software Engineering
AI learns to write better software tests.
Breaking Barriers in Software Testing: The Power of AI-Driven Automation
Software Engineering
AI finds software bugs faster and cheaper.