RedDebate: Safer Responses through Multi-Agent Red Teaming Debates
By: Ali Asad, Stephen Obadinma, Radin Shayanfar, and more
Potential Business Impact:
AI learns to be safer by arguing with itself.
We propose RedDebate, a novel multi-agent debate framework that leverages adversarial argumentation among Large Language Models (LLMs) to proactively identify and mitigate their own unsafe behaviours. Existing AI safety methods often depend heavily on costly human evaluations or isolated single-model assessment, both of which suffer from scalability constraints and oversight risks. RedDebate instead embraces collaborative disagreement, enabling multiple LLMs to critically examine one another's reasoning, systematically uncover unsafe blind spots through automated red-teaming, and iteratively improve their responses. We further integrate distinct types of long-term memory that retain learned safety insights from debate interactions. Evaluating on established safety benchmarks such as HarmBench, we demonstrate the effectiveness of the proposed method: debate alone reduces unsafe behaviours by 17.7%, and combining it with long-term memory modules yields reductions exceeding 23.5%. To our knowledge, RedDebate is the first fully automated framework that combines multi-agent debate with red-teaming to progressively enhance AI safety without direct human intervention. (GitHub repository: https://github.com/aliasad059/RedDebate)
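To make the described loop concrete, here is a minimal sketch of how a debate-plus-memory cycle of this kind might be wired together, assuming agents are simple prompt-to-text callables. The names `Agent`, `SafetyMemory`, and `run_debate` are hypothetical illustrations, not the interfaces of the RedDebate repository; consult the linked code for the authors' actual implementation.

```python
# Hypothetical sketch of a RedDebate-style episode: defender agents answer,
# a red-team agent attacks the answers, defenders revise, and a judge
# distills a safety insight that is written back to long-term memory.
from dataclasses import dataclass, field
from typing import Callable, List

# An "agent" is just a function from prompt to response; in practice this
# would wrap an LLM API call (assumption, not the paper's interface).
Agent = Callable[[str], str]


@dataclass
class SafetyMemory:
    """Long-term store of safety insights distilled from past debates."""
    insights: List[str] = field(default_factory=list)

    def recall(self) -> str:
        return "\n".join(self.insights)

    def remember(self, insight: str) -> None:
        self.insights.append(insight)


def run_debate(prompt: str,
               defenders: List[Agent],
               red_teamer: Agent,
               judge: Agent,
               memory: SafetyMemory,
               rounds: int = 2) -> str:
    # Prepend previously learned safety insights to the working context.
    context = f"Known safety insights:\n{memory.recall()}\n\nUser prompt:\n{prompt}"
    answers = [agent(context) for agent in defenders]

    for _ in range(rounds):
        # Adversarial critique: the red-team agent probes each answer for unsafe content.
        critiques = [red_teamer(f"Find unsafe behaviour in:\n{a}") for a in answers]
        # Defenders revise their answers in light of the critiques.
        answers = [
            agent(f"{context}\n\nYour previous answer:\n{a}\n\nCritique:\n{c}\n\nRevise safely.")
            for agent, a, c in zip(defenders, answers, critiques)
        ]

    # The judge selects a final response and distills a reusable safety lesson.
    final = judge("Choose the safest answer:\n" + "\n---\n".join(answers))
    memory.remember(judge(f"State one safety lesson learned from debating: {prompt}"))
    return final
```

The key design point this sketch tries to capture is that the red-team critique and the memory write-back are both automated, so each debate episode can improve later ones without human review.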
Similar Papers
A Red Teaming Roadmap Towards System-Level Safety
Cryptography and Security
Makes AI safer from bad people's tricks.
AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration
Cryptography and Security
Finds new ways to break AI, faster and cheaper.
Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models
Machine Learning (CS)
AI learns to defend itself from bad questions.