Assessing and Enhancing the Robustness of LLM-based Multi-Agent Systems Through Chaos Engineering
By: Joshua Owotogbe
Potential Business Impact:
Makes teams of AI agents work reliably and break less often.
This study explores how chaos engineering can enhance the robustness of Large Language Model-based Multi-Agent Systems (LLM-MAS) operating under real-world conditions in production-like environments. LLM-MAS can improve a wide range of tasks, from answering questions and generating content to automating customer support and supporting decision-making. However, in production or pre-production environments, LLM-MAS are vulnerable to emergent errors and disruptions such as hallucinations, individual agent failures, and failures in agent-to-agent communication. This study proposes a chaos engineering framework to proactively identify such vulnerabilities in LLM-MAS, assess and build resilience against them, and ensure reliable performance in critical applications.
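The abstract does not include an implementation, but the core idea of chaos engineering for LLM-MAS, injecting faults such as dropped messages, crashed agents, or garbled replies and then checking whether the system still completes its task, can be sketched in plain Python. The agent names, fault types, and probabilities below are illustrative assumptions for a toy pipeline, not the paper's actual framework.

```python
import random
from dataclasses import dataclass

# Hypothetical fault-injection sketch: not the paper's framework, just an
# illustration of chaos-style experiments on a toy multi-agent pipeline.

@dataclass
class ChaosConfig:
    drop_message_prob: float = 0.2   # simulate a communication failure
    agent_crash_prob: float = 0.1    # simulate an agent going down
    garble_prob: float = 0.15        # crude stand-in for a hallucinated reply
    seed: int = 0

class ToyAgent:
    """Stand-in for an LLM-backed agent; echoes and extends the message."""
    def __init__(self, name: str):
        self.name = name

    def act(self, message: str) -> str:
        return f"{message} -> handled by {self.name}"

def run_pipeline(agents, task: str, cfg: ChaosConfig) -> bool:
    """Pass the task through the agent chain while injecting faults.
    Returns True if the final output still contains the original task."""
    rng = random.Random(cfg.seed)
    message = task
    for agent in agents:
        if rng.random() < cfg.agent_crash_prob:
            continue                     # agent failure: skip its contribution
        if rng.random() < cfg.drop_message_prob:
            message = ""                 # communication failure: message lost
        reply = agent.act(message)
        if rng.random() < cfg.garble_prob:
            reply = reply[::-1]          # garbled/hallucinated content
        message = reply
    return task in message               # crude robustness check

if __name__ == "__main__":
    agents = [ToyAgent("planner"), ToyAgent("researcher"), ToyAgent("writer")]
    trials = 100
    successes = sum(
        run_pipeline(agents, "summarize Q3 report", ChaosConfig(seed=i))
        for i in range(trials)
    )
    print(f"task survived fault injection in {successes}/{trials} trials")
```

Repeating such trials while varying the fault probabilities gives a rough resilience profile; a real chaos experiment on an LLM-MAS would inject faults at the agent-communication and model layers and use task-level success metrics rather than this simple substring check.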
Similar Papers
Enhancing Robustness of LLM-Driven Multi-Agent Systems through Randomized Smoothing
Artificial Intelligence
Keeps AI systems from making dangerous mistakes.
Position: Towards a Responsible LLM-empowered Multi-Agent Systems
Multiagent Systems
Makes AI helpers work together safely and smartly.
LLM-Powered Fully Automated Chaos Engineering: Towards Enabling Anyone to Build Resilient Software Systems at Low Cost
Software Engineering
Makes computer systems stronger by finding and fixing problems.