Hallucination to Consensus: Multi-Agent LLMs for End-to-End Test Generation
By: Qinghua Xu, Guancheng Wang, Lionel Briand, and more
Potential Business Impact:
Makes computers write accurate software tests automatically.
Unit testing plays a critical role in ensuring software correctness. However, writing unit tests manually is labor-intensive, especially for strongly typed languages like Java, motivating the need for automated approaches. Traditional methods primarily rely on search-based or randomized algorithms to achieve high code coverage and produce regression oracles, which are derived from the program's current behavior rather than its intended functionality. Recent advances in large language models (LLMs) have enabled oracle generation from natural language descriptions, aligning better with user requirements. However, existing LLM-based methods often require fine-tuning or rely on external tools such as EvoSuite for test prefix generation, making them costly or cumbersome to apply in practice. In this work, we propose CANDOR, a novel prompt-engineering-based LLM framework for automated unit test generation in Java. CANDOR orchestrates multiple specialized LLM agents to collaboratively generate complete tests. To mitigate the notorious hallucinations of LLMs and improve oracle correctness, we introduce a novel strategy that engages multiple reasoning LLMs in a panel discussion and generates accurate oracles based on consensus. Additionally, to reduce the verbosity of the reasoning LLMs' outputs, we propose a novel dual-LLM pipeline that produces concise and structured oracle evaluations. Our experiments show that CANDOR is comparable to EvoSuite in generating tests with high code coverage and clearly superior in mutation score. Moreover, our prompt-engineering-based approach significantly outperforms the state-of-the-art fine-tuning-based oracle generator TOGLL by at least 21.1 percentage points in oracle correctness on both correct and faulty source code. Further ablation studies confirm the critical contributions of key agents in generating high-quality tests.
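To make the consensus idea concrete, below is a minimal Java sketch of selecting a test oracle by majority vote among several "panelist" LLMs. This is an illustrative assumption, not CANDOR's actual API: the `Panelist` interface, the `consensusOracle` method, and the `Calculator` example are hypothetical, and the paper's panel discussion is richer than a single vote, but the sketch captures the core intuition that independent agreement filters out hallucinated oracles.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of consensus-based oracle selection, assuming a
 * simple majority vote. Each Panelist stands in for one reasoning
 * LLM that proposes an assertion (oracle) for a given test prefix.
 * All names here are illustrative, not CANDOR's actual API.
 */
public class OracleConsensus {

    /** Stand-in for one reasoning LLM on the panel. */
    interface Panelist {
        String proposeOracle(String testPrefix);
    }

    /** Returns the assertion proposed by the most panelists. */
    static String consensusOracle(List<Panelist> panel, String testPrefix) {
        Map<String, Integer> votes = new HashMap<>();
        for (Panelist p : panel) {
            votes.merge(p.proposeOracle(testPrefix), 1, Integer::sum);
        }
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        String prefix = "Calculator c = new Calculator(); int r = c.add(2, 3);";
        // Two panelists agree on the correct oracle; one hallucinates
        // and is outvoted by the rest of the panel.
        List<Panelist> panel = List.of(
                pre -> "assertEquals(5, r);",
                pre -> "assertEquals(5, r);",
                pre -> "assertEquals(6, r);"  // hallucinated oracle
        );
        System.out.println(consensusOracle(panel, prefix));  // assertEquals(5, r);
    }
}
```

In practice the winning assertion would be appended to the test prefix to form a complete JUnit test; the sketch omits prompt construction and the dual-LLM step that condenses the panelists' verbose reasoning into structured evaluations.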
Similar Papers
AugmenTest: Enhancing Tests with LLM-Driven Oracles
Software Engineering
Helps computers check if software works correctly.
CodeCoR: An LLM-Based Self-Reflective Multi-Agent Framework for Code Generation
Software Engineering
Makes computers write correct code by checking their own work.
Multi-Agent LLM Committees for Autonomous Software Beta Testing
Software Engineering
Helps computers find bugs in apps faster.