UnitTenX: Generating Tests for Legacy Packages with AI Agents Powered by Formal Verification
By: Yiannis Charalambous, Claudionor N. Coelho Jr., Luis Lamb, and others
Potential Business Impact:
Makes old computer code work better and more safely.
This paper introduces UnitTenX, a state-of-the-art open-source AI multi-agent system designed to generate unit tests for legacy code, enhancing test coverage and critical value testing. UnitTenX leverages a combination of AI agents, formal methods, and Large Language Models (LLMs) to automate test generation, addressing the challenges posed by complex and legacy codebases. Despite the limitations of LLMs in bug detection, UnitTenX offers a robust framework for improving software reliability and maintainability. Our results demonstrate the effectiveness of this approach in generating high-quality tests and identifying potential issues. Additionally, our approach enhances the readability and documentation of legacy code.
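The abstract describes a loop in which AI agents propose unit tests, coverage and formal tools measure what the tests exercise, and the feedback drives another round of generation. The sketch below caricatures that loop in plain Python; the `legacy_clamp` target, the `mock_test_agent` stand-in for an LLM, and the crude branch-coverage check are all illustrative assumptions, not UnitTenX's actual components.

```python
# Hypothetical generate-measure-refine loop in the spirit of UnitTenX.
# Every name here is a stand-in invented for illustration.

def legacy_clamp(x, lo, hi):
    """A small 'legacy' function we want covered by tests."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def mock_test_agent(feedback):
    """Stand-in for an LLM agent: proposes test inputs, then refines
    them when feedback reports uncovered branches."""
    if feedback is None:
        # First attempt: only exercises the pass-through branch.
        return [(5, 0, 10)]
    # Refined attempt: boundary-violating values for the clamp branches.
    return [(-1, 0, 10), (99, 0, 10)]

def coverage_feedback(cases):
    """Crude branch-coverage check standing in for coverage/formal tooling.
    Returns the list of uncovered branches, or None when all are hit."""
    hit = {"low": False, "high": False, "mid": False}
    for x, lo, hi in cases:
        r = legacy_clamp(x, lo, hi)
        if x < lo and r == lo:
            hit["low"] = True
        elif x > hi and r == hi:
            hit["high"] = True
        else:
            hit["mid"] = True
    missing = [branch for branch, ok in hit.items() if not ok]
    return missing or None

def agent_loop(max_rounds=3):
    """Iterate: generate tests, measure coverage, refine until covered."""
    cases, feedback = [], None
    for _ in range(max_rounds):
        cases += mock_test_agent(feedback)
        feedback = coverage_feedback(cases)
        if feedback is None:  # all branches covered: stop early
            break
    return cases, feedback

tests, uncovered = agent_loop()
print(len(tests), uncovered)  # three cases, no uncovered branches
```

In the real system the mock agent would be an LLM prompted with the source and the coverage report, and the feedback would come from coverage instrumentation and formal-methods tools rather than a hand-written branch tally.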
Similar Papers
Large Language Models for Unit Test Generation: Achievements, Challenges, and the Road Ahead
Software Engineering
Helps computers write better code tests automatically.
HPCAgentTester: A Multi-Agent LLM Approach for Enhanced HPC Unit Test Generation
Distributed, Parallel, and Cluster Computing
Tests super-fast computer programs automatically.
TENET: Leveraging Tests Beyond Validation for Code Generation
Software Engineering
Helps AI write better code by testing it.