Detecting Silent Failures in Multi-Agentic AI Trajectories
By: Divya Pathak, Harshit Kumar, Anuska Roy, and more
Potential Business Impact:
Finds hidden mistakes made by teams of AI agents.
Multi-Agentic AI systems, powered by large language models (LLMs), are inherently non-deterministic and prone to silent failures such as drift, cycles, and missing details in outputs, which are difficult to detect. We introduce the task of anomaly detection in agentic trajectories to identify these failures and present a dataset curation pipeline that captures user behavior, agent non-determinism, and LLM variation. Using this pipeline, we curate and label two benchmark datasets comprising 4,275 and 894 trajectories from Multi-Agentic AI systems. Benchmarking anomaly detection methods on these datasets, we show that supervised (XGBoost) and semi-supervised (SVDD) approaches perform comparably, achieving accuracies up to 98% and 96%, respectively. This work provides the first systematic study of anomaly detection in Multi-Agentic AI systems, offering datasets, benchmarks, and insights to guide future research.
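To make the benchmarked setup concrete, here is a minimal sketch of how a supervised (XGBoost) and a semi-supervised (SVDD-style) detector could be compared on per-trajectory feature vectors. The paper's actual features, datasets, and hyperparameters are not given in the abstract, so the synthetic features below are placeholders, and scikit-learn's OneClassSVM stands in for SVDD (the two are equivalent for RBF kernels).

# Sketch: comparing a supervised and a semi-supervised anomaly
# detector on hypothetical trajectory feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import OneClassSVM  # RBF One-Class SVM ~ SVDD
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder trajectory features (e.g., step counts, cycle
# indicators, output-completeness scores); label 1 = anomalous.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Supervised: train on labeled normal and anomalous trajectories.
xgb = XGBClassifier(n_estimators=200, max_depth=4)
xgb.fit(X_tr, y_tr)
print("XGBoost accuracy:", accuracy_score(y_te, xgb.predict(X_te)))

# Semi-supervised: fit only on normal trajectories, flag outliers.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
ocsvm.fit(X_tr[y_tr == 0])
pred = (ocsvm.predict(X_te) == -1).astype(int)  # -1 = outlier
print("One-Class SVM accuracy:", accuracy_score(y_te, pred))

The design mirrors the two regimes in the abstract: the supervised model needs anomaly labels at training time, while the semi-supervised one learns a boundary around normal trajectories only and flags anything outside it.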
Similar Papers
Trajectory Guard -- A Lightweight, Sequence-Aware Model for Real-Time Anomaly Detection in Agentic AI
Machine Learning (CS)
Finds bad steps in AI plans before they cause problems.
AgenTracer: Who Is Inducing Failure in the LLM Agentic Systems?
Computation and Language
Finds which AI agent in a team causes a mistake.