Efficient LLM Safety Evaluation through Multi-Agent Debate
By: Dachuan Lin , Guobin Shen , Zihao Yang and more
Potential Business Impact:
Makes AI safer and cheaper to test.
Safety evaluation of large language models (LLMs) increasingly relies on LLM-as-a-Judge frameworks, but the high cost of frontier models limits scalability. We propose a cost-efficient multi-agent judging framework that employs Small Language Models (SLMs) through structured debates among critic, defender, and judge agents. To rigorously assess safety judgments, we construct HAJailBench, a large-scale human-annotated jailbreak benchmark comprising 12,000 adversarial interactions across diverse attack methods and target models. The dataset provides fine-grained, expert-labeled ground truth for evaluating both safety robustness and judge reliability. Our SLM-based framework achieves agreement comparable to GPT-4o judges on HAJailBench while substantially reducing inference cost. Ablation results show that three rounds of debate yield the optimal balance between accuracy and efficiency. These findings demonstrate that structured, value-aligned debate enables SLMs to capture semantic nuances of jailbreak attacks and that HAJailBench offers a reliable foundation for scalable LLM safety evaluation.
Similar Papers
Multi-Agent Debate for LLM Judges with Adaptive Stability Detection
Artificial Intelligence
Debating computers make better judgments than voting ones.
Benchmarking Adversarial Robustness to Bias Elicitation in Large Language Models: Scalable Automated Assessment with LLM-as-a-Judge
Computation and Language
Tests AI for unfairness, making it safer.
Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges
Machine Learning (CS)
Makes AI judges more honest and reliable.